Oct 11 10:26:05.017216 master-2 systemd[1]: Starting Kubernetes Kubelet... Oct 11 10:26:05.705060 master-2 kubenswrapper[4776]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 11 10:26:05.705060 master-2 kubenswrapper[4776]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Oct 11 10:26:05.705060 master-2 kubenswrapper[4776]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 11 10:26:05.705060 master-2 kubenswrapper[4776]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 11 10:26:05.705060 master-2 kubenswrapper[4776]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 11 10:26:05.705060 master-2 kubenswrapper[4776]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 11 10:26:05.706386 master-2 kubenswrapper[4776]: I1011 10:26:05.706043 4776 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 11 10:26:05.717953 master-2 kubenswrapper[4776]: W1011 10:26:05.717876 4776 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 11 10:26:05.717953 master-2 kubenswrapper[4776]: W1011 10:26:05.717917 4776 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 11 10:26:05.717953 master-2 kubenswrapper[4776]: W1011 10:26:05.717930 4776 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 11 10:26:05.717953 master-2 kubenswrapper[4776]: W1011 10:26:05.717950 4776 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 11 10:26:05.717953 master-2 kubenswrapper[4776]: W1011 10:26:05.717962 4776 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.717976 4776 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.717988 4776 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.717999 4776 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718010 4776 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718020 4776 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718030 4776 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 11 10:26:05.718221 
master-2 kubenswrapper[4776]: W1011 10:26:05.718044 4776 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718057 4776 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718069 4776 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718079 4776 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718089 4776 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718102 4776 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718113 4776 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718126 4776 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718136 4776 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718146 4776 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718159 4776 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718172 4776 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 11 10:26:05.718221 master-2 kubenswrapper[4776]: W1011 10:26:05.718184 4776 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718195 4776 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718205 4776 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718219 4776 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718233 4776 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718244 4776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718256 4776 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718267 4776 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718277 4776 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718289 4776 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718302 4776 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718312 4776 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718322 4776 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718331 4776 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718341 4776 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718351 4776 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718361 4776 feature_gate.go:330] unrecognized feature gate: Example Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718371 4776 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718381 4776 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718390 4776 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 11 10:26:05.719096 master-2 kubenswrapper[4776]: W1011 10:26:05.718400 4776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718410 4776 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718419 4776 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718429 4776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718439 4776 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718452 4776 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718464 4776 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718475 4776 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718487 4776 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718498 4776 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718508 4776 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718517 4776 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718524 4776 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718532 4776 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718542 4776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718550 4776 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718557 4776 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718565 4776 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718572 4776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 11 10:26:05.719982 master-2 kubenswrapper[4776]: W1011 10:26:05.718580 4776 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: W1011 10:26:05.718588 4776 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: W1011 10:26:05.718597 4776 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: W1011 10:26:05.718606 4776 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: W1011 10:26:05.718614 4776 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: W1011 10:26:05.718621 4776 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: W1011 10:26:05.718630 4776 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: W1011 10:26:05.718638 4776 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: W1011 10:26:05.718645 4776 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: W1011 10:26:05.718653 4776 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719646 4776 flags.go:64] FLAG: 
--address="0.0.0.0" Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719704 4776 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719723 4776 flags.go:64] FLAG: --anonymous-auth="true" Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719742 4776 flags.go:64] FLAG: --application-metrics-count-limit="100" Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719753 4776 flags.go:64] FLAG: --authentication-token-webhook="false" Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719762 4776 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719775 4776 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719787 4776 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719796 4776 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719805 4776 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719815 4776 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719824 4776 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Oct 11 10:26:05.720868 master-2 kubenswrapper[4776]: I1011 10:26:05.719834 4776 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719843 4776 flags.go:64] FLAG: --cgroup-root="" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719851 4776 flags.go:64] FLAG: --cgroups-per-qos="true" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719860 4776 flags.go:64] FLAG: --client-ca-file="" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719870 4776 flags.go:64] FLAG: --cloud-config="" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719881 4776 flags.go:64] FLAG: --cloud-provider="" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719890 4776 flags.go:64] FLAG: --cluster-dns="[]" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719900 4776 flags.go:64] FLAG: --cluster-domain="" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719909 4776 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719918 4776 flags.go:64] FLAG: --config-dir="" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719927 4776 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719937 4776 flags.go:64] FLAG: --container-log-max-files="5" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719948 4776 flags.go:64] FLAG: --container-log-max-size="10Mi" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719956 4776 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719965 4776 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 
10:26:05.719975 4776 flags.go:64] FLAG: --containerd-namespace="k8s.io" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719983 4776 flags.go:64] FLAG: --contention-profiling="false" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.719993 4776 flags.go:64] FLAG: --cpu-cfs-quota="true" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.720002 4776 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.720012 4776 flags.go:64] FLAG: --cpu-manager-policy="none" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.720056 4776 flags.go:64] FLAG: --cpu-manager-policy-options="" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.720068 4776 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.720077 4776 flags.go:64] FLAG: --enable-controller-attach-detach="true" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.720087 4776 flags.go:64] FLAG: --enable-debugging-handlers="true" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.720095 4776 flags.go:64] FLAG: --enable-load-reader="false" Oct 11 10:26:05.721842 master-2 kubenswrapper[4776]: I1011 10:26:05.720104 4776 flags.go:64] FLAG: --enable-server="true" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720113 4776 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720125 4776 flags.go:64] FLAG: --event-burst="100" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720134 4776 flags.go:64] FLAG: --event-qps="50" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720143 4776 flags.go:64] FLAG: --event-storage-age-limit="default=0" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720152 4776 flags.go:64] FLAG: --event-storage-event-limit="default=0" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720162 4776 flags.go:64] FLAG: --eviction-hard="" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720173 4776 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720183 4776 flags.go:64] FLAG: --eviction-minimum-reclaim="" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720195 4776 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720206 4776 flags.go:64] FLAG: --eviction-soft="" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720221 4776 flags.go:64] FLAG: --eviction-soft-grace-period="" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720233 4776 flags.go:64] FLAG: --exit-on-lock-contention="false" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720245 4776 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720256 4776 flags.go:64] FLAG: --experimental-mounter-path="" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720268 4776 flags.go:64] FLAG: --fail-cgroupv1="false" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720279 4776 flags.go:64] FLAG: --fail-swap-on="true" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720291 4776 flags.go:64] FLAG: 
--feature-gates="" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720305 4776 flags.go:64] FLAG: --file-check-frequency="20s" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720317 4776 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720328 4776 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720340 4776 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720352 4776 flags.go:64] FLAG: --healthz-port="10248" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720364 4776 flags.go:64] FLAG: --help="false" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720376 4776 flags.go:64] FLAG: --hostname-override="" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720387 4776 flags.go:64] FLAG: --housekeeping-interval="10s" Oct 11 10:26:05.723181 master-2 kubenswrapper[4776]: I1011 10:26:05.720399 4776 flags.go:64] FLAG: --http-check-frequency="20s" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720411 4776 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720422 4776 flags.go:64] FLAG: --image-credential-provider-config="" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720433 4776 flags.go:64] FLAG: --image-gc-high-threshold="85" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720442 4776 flags.go:64] FLAG: --image-gc-low-threshold="80" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720453 4776 flags.go:64] FLAG: --image-service-endpoint="" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720462 4776 flags.go:64] FLAG: --kernel-memcg-notification="false" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720472 4776 flags.go:64] FLAG: --kube-api-burst="100" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720482 4776 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720491 4776 flags.go:64] FLAG: --kube-api-qps="50" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720500 4776 flags.go:64] FLAG: --kube-reserved="" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720509 4776 flags.go:64] FLAG: --kube-reserved-cgroup="" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720517 4776 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720527 4776 flags.go:64] FLAG: --kubelet-cgroups="" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720535 4776 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720544 4776 flags.go:64] FLAG: --lock-file="" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720552 4776 flags.go:64] FLAG: --log-cadvisor-usage="false" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720562 4776 flags.go:64] FLAG: --log-flush-frequency="5s" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720571 4776 flags.go:64] FLAG: --log-json-info-buffer-size="0" Oct 11 10:26:05.724538 master-2 
kubenswrapper[4776]: I1011 10:26:05.720585 4776 flags.go:64] FLAG: --log-json-split-stream="false" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720593 4776 flags.go:64] FLAG: --log-text-info-buffer-size="0" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720602 4776 flags.go:64] FLAG: --log-text-split-stream="false" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720611 4776 flags.go:64] FLAG: --logging-format="text" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720620 4776 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720629 4776 flags.go:64] FLAG: --make-iptables-util-chains="true" Oct 11 10:26:05.724538 master-2 kubenswrapper[4776]: I1011 10:26:05.720638 4776 flags.go:64] FLAG: --manifest-url="" Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720647 4776 flags.go:64] FLAG: --manifest-url-header="" Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720659 4776 flags.go:64] FLAG: --max-housekeeping-interval="15s" Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720668 4776 flags.go:64] FLAG: --max-open-files="1000000" Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720710 4776 flags.go:64] FLAG: --max-pods="110" Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720720 4776 flags.go:64] FLAG: --maximum-dead-containers="-1" Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720729 4776 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720738 4776 flags.go:64] FLAG: --memory-manager-policy="None" Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720746 4776 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720755 4776 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720764 4776 flags.go:64] FLAG: --node-ip="192.168.34.12" Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720773 4776 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720793 4776 flags.go:64] FLAG: --node-status-max-images="50" Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720802 4776 flags.go:64] FLAG: --node-status-update-frequency="10s" Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720810 4776 flags.go:64] FLAG: --oom-score-adj="-999" Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720820 4776 flags.go:64] FLAG: --pod-cidr="" Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720833 4776 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d66b9dbe1d071d7372c477a78835fb65b48ea82db00d23e9086af5cfcb194ad" Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720850 4776 flags.go:64] FLAG: --pod-manifest-path="" Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720861 4776 flags.go:64] FLAG: --pod-max-pids="-1" Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720873 4776 flags.go:64] FLAG: --pods-per-core="0" Oct 11 10:26:05.725640 master-2 
kubenswrapper[4776]: I1011 10:26:05.720885 4776 flags.go:64] FLAG: --port="10250" Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720897 4776 flags.go:64] FLAG: --protect-kernel-defaults="false" Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720908 4776 flags.go:64] FLAG: --provider-id="" Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720920 4776 flags.go:64] FLAG: --qos-reserved="" Oct 11 10:26:05.725640 master-2 kubenswrapper[4776]: I1011 10:26:05.720932 4776 flags.go:64] FLAG: --read-only-port="10255" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.720943 4776 flags.go:64] FLAG: --register-node="true" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.720955 4776 flags.go:64] FLAG: --register-schedulable="true" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.720967 4776 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.720986 4776 flags.go:64] FLAG: --registry-burst="10" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.720997 4776 flags.go:64] FLAG: --registry-qps="5" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721007 4776 flags.go:64] FLAG: --reserved-cpus="" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721019 4776 flags.go:64] FLAG: --reserved-memory="" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721031 4776 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721043 4776 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721054 4776 flags.go:64] FLAG: --rotate-certificates="false" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721064 4776 flags.go:64] FLAG: --rotate-server-certificates="false" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721075 4776 flags.go:64] FLAG: --runonce="false" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721086 4776 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721097 4776 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721109 4776 flags.go:64] FLAG: --seccomp-default="false" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721120 4776 flags.go:64] FLAG: --serialize-image-pulls="true" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721131 4776 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721143 4776 flags.go:64] FLAG: --storage-driver-db="cadvisor" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721155 4776 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721167 4776 flags.go:64] FLAG: --storage-driver-password="root" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721178 4776 flags.go:64] FLAG: --storage-driver-secure="false" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721189 4776 flags.go:64] FLAG: --storage-driver-table="stats" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721199 4776 flags.go:64] FLAG: 
--storage-driver-user="root" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721208 4776 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Oct 11 10:26:05.726740 master-2 kubenswrapper[4776]: I1011 10:26:05.721217 4776 flags.go:64] FLAG: --sync-frequency="1m0s" Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721227 4776 flags.go:64] FLAG: --system-cgroups="" Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721235 4776 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721253 4776 flags.go:64] FLAG: --system-reserved-cgroup="" Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721262 4776 flags.go:64] FLAG: --tls-cert-file="" Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721271 4776 flags.go:64] FLAG: --tls-cipher-suites="[]" Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721282 4776 flags.go:64] FLAG: --tls-min-version="" Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721290 4776 flags.go:64] FLAG: --tls-private-key-file="" Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721298 4776 flags.go:64] FLAG: --topology-manager-policy="none" Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721317 4776 flags.go:64] FLAG: --topology-manager-policy-options="" Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721326 4776 flags.go:64] FLAG: --topology-manager-scope="container" Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721335 4776 flags.go:64] FLAG: --v="2" Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721347 4776 flags.go:64] FLAG: --version="false" Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721358 4776 flags.go:64] FLAG: --vmodule="" Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721368 4776 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: I1011 10:26:05.721378 4776 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: W1011 10:26:05.721580 4776 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: W1011 10:26:05.721591 4776 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: W1011 10:26:05.721602 4776 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: W1011 10:26:05.721610 4776 feature_gate.go:330] unrecognized feature gate: Example Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: W1011 10:26:05.721619 4776 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: W1011 10:26:05.721627 4776 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: W1011 10:26:05.721634 4776 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 11 10:26:05.727828 master-2 kubenswrapper[4776]: W1011 10:26:05.721642 4776 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721651 4776 feature_gate.go:330] unrecognized feature 
gate: HardwareSpeed Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721660 4776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721668 4776 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721708 4776 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721717 4776 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721724 4776 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721732 4776 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721740 4776 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721748 4776 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721755 4776 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721763 4776 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721770 4776 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721778 4776 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721786 4776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721794 4776 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721803 4776 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721811 4776 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721822 4776 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721830 4776 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 11 10:26:05.728902 master-2 kubenswrapper[4776]: W1011 10:26:05.721838 4776 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721846 4776 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721856 4776 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721864 4776 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721872 4776 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721881 4776 
feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721889 4776 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721897 4776 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721904 4776 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721913 4776 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721920 4776 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721928 4776 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721935 4776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721946 4776 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721955 4776 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721963 4776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721971 4776 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721979 4776 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721986 4776 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.721994 4776 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 11 10:26:05.729782 master-2 kubenswrapper[4776]: W1011 10:26:05.722002 4776 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722009 4776 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722017 4776 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722024 4776 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722032 4776 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722040 4776 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722047 4776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722054 4776 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722062 4776 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 
11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722070 4776 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722083 4776 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722091 4776 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722100 4776 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722107 4776 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722115 4776 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722123 4776 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722130 4776 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722138 4776 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722146 4776 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722153 4776 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 11 10:26:05.730754 master-2 kubenswrapper[4776]: W1011 10:26:05.722161 4776 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 11 10:26:05.731637 master-2 kubenswrapper[4776]: W1011 10:26:05.722171 4776 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Oct 11 10:26:05.731637 master-2 kubenswrapper[4776]: W1011 10:26:05.722182 4776 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 11 10:26:05.731637 master-2 kubenswrapper[4776]: W1011 10:26:05.722191 4776 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 11 10:26:05.731637 master-2 kubenswrapper[4776]: W1011 10:26:05.722203 4776 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Oct 11 10:26:05.731637 master-2 kubenswrapper[4776]: I1011 10:26:05.724811 4776 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 11 10:26:05.738998 master-2 kubenswrapper[4776]: I1011 10:26:05.738926 4776 server.go:491] "Kubelet version" kubeletVersion="v1.31.13" Oct 11 10:26:05.738998 master-2 kubenswrapper[4776]: I1011 10:26:05.738987 4776 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 11 10:26:05.739198 master-2 kubenswrapper[4776]: W1011 10:26:05.739156 4776 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 11 10:26:05.739198 master-2 kubenswrapper[4776]: W1011 10:26:05.739187 4776 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 11 10:26:05.739300 master-2 kubenswrapper[4776]: W1011 10:26:05.739200 4776 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 11 10:26:05.739300 master-2 kubenswrapper[4776]: W1011 10:26:05.739214 4776 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 11 10:26:05.739300 master-2 kubenswrapper[4776]: W1011 10:26:05.739227 4776 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 11 10:26:05.739300 master-2 kubenswrapper[4776]: W1011 10:26:05.739243 4776 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 11 10:26:05.739300 master-2 kubenswrapper[4776]: W1011 10:26:05.739263 4776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 11 10:26:05.739300 master-2 kubenswrapper[4776]: W1011 10:26:05.739272 4776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 11 10:26:05.739300 master-2 kubenswrapper[4776]: W1011 10:26:05.739283 4776 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 11 10:26:05.739300 master-2 kubenswrapper[4776]: W1011 10:26:05.739292 4776 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 11 10:26:05.739300 master-2 kubenswrapper[4776]: W1011 10:26:05.739301 4776 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 11 10:26:05.739300 master-2 kubenswrapper[4776]: W1011 10:26:05.739313 4776 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739325 4776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739336 4776 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739347 4776 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739356 4776 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739365 4776 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739374 4776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739383 4776 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739393 4776 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739405 4776 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739414 4776 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739424 4776 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739435 4776 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739446 4776 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739454 4776 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739462 4776 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739470 4776 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739481 4776 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739493 4776 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 11 10:26:05.739965 master-2 kubenswrapper[4776]: W1011 10:26:05.739502 4776 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739511 4776 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739519 4776 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739527 4776 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739536 4776 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739545 4776 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739553 4776 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739561 4776 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739570 4776 feature_gate.go:330] unrecognized feature gate: Example Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739578 4776 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739586 4776 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739594 4776 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739602 4776 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739611 4776 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739619 4776 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739630 4776 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739638 4776 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739647 4776 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739655 4776 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739663 4776 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 11 10:26:05.741066 master-2 kubenswrapper[4776]: W1011 10:26:05.739671 4776 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739715 4776 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739726 4776 feature_gate.go:330] unrecognized feature gate: 
ImageStreamImportMode Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739737 4776 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739747 4776 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739759 4776 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739769 4776 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739778 4776 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739787 4776 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739795 4776 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739804 4776 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739812 4776 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739820 4776 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739829 4776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739837 4776 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739845 4776 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739854 4776 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739862 4776 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739870 4776 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739878 4776 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 11 10:26:05.742300 master-2 kubenswrapper[4776]: W1011 10:26:05.739886 4776 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.739895 4776 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: I1011 10:26:05.739910 4776 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741488 4776 
feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741520 4776 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741535 4776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741558 4776 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741572 4776 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741583 4776 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741594 4776 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741606 4776 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741618 4776 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741629 4776 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741640 4776 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741653 4776 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741665 4776 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 11 10:26:05.743218 master-2 kubenswrapper[4776]: W1011 10:26:05.741707 4776 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741719 4776 feature_gate.go:330] unrecognized feature gate: Example Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741730 4776 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741752 4776 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741762 4776 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741778 4776 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741796 4776 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741807 4776 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741818 4776 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741829 4776 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741840 4776 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741850 4776 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741861 4776 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741871 4776 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741881 4776 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741901 4776 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741912 4776 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741922 4776 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741933 4776 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741943 4776 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 11 10:26:05.743968 master-2 kubenswrapper[4776]: W1011 10:26:05.741953 4776 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.741964 4776 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.741974 4776 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.741984 4776 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.741994 4776 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742005 4776 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742015 4776 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742028 4776 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742049 4776 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742061 4776 
feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742075 4776 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742088 4776 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742173 4776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742183 4776 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742193 4776 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742205 4776 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742551 4776 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742577 4776 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742586 4776 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742595 4776 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 11 10:26:05.745221 master-2 kubenswrapper[4776]: W1011 10:26:05.742603 4776 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742612 4776 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742621 4776 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742629 4776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742637 4776 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742652 4776 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
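The repeated feature_gate.go:330 warnings are the kubelet skipping gate names that are not in its own (upstream Kubernetes) feature-gate registry. The names involved (GatewayAPI, NewOLM, PinnedImages, MachineConfigNodes, and so on) match OpenShift cluster-level feature gates, so these warnings look expected rather than actionable. A small sketch, assuming a journal excerpt in this format, to tally which names are being ignored and how often:

import re
from collections import Counter

def unrecognized_gates(journal_text):
    """Count occurrences of each 'unrecognized feature gate: <Name>' warning."""
    return Counter(re.findall(r"unrecognized feature gate: (\S+)", journal_text))

# Usage (file name is illustrative):
# with open("kubelet.log") as f:
#     for name, count in unrecognized_gates(f.read()).most_common():
#         print(f"{count:3d}  {name}")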
Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742666 4776 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742711 4776 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742720 4776 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742729 4776 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742737 4776 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742746 4776 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742755 4776 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742764 4776 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742776 4776 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742787 4776 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742798 4776 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742807 4776 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 11 10:26:05.746167 master-2 kubenswrapper[4776]: W1011 10:26:05.742819 4776 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
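Alongside the unrecognized names, feature_gate.go:351/353 also flags gates that were set explicitly even though they are already GA or deprecated (KMSv1, DisableKubeletCloudCredentialProviders, ValidatingAdmissionPolicy, CloudDualStackNodeIPs above). A companion sketch under the same journal-format assumption, listing those so they can eventually be dropped from the configuration:

import re

def explicitly_set_stale_gates(journal_text):
    """Return (status, gate, value) tuples from 'Setting GA/deprecated
    feature gate <Name>=<bool>' warnings."""
    pattern = r"Setting (GA|deprecated) feature gate (\w+)=(true|false)"
    return re.findall(pattern, journal_text)

# Against the warnings above this would yield:
# [('deprecated', 'KMSv1', 'true'),
#  ('GA', 'DisableKubeletCloudCredentialProviders', 'true'),
#  ('GA', 'ValidatingAdmissionPolicy', 'true'),
#  ('GA', 'CloudDualStackNodeIPs', 'true')]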
Oct 11 10:26:05.747125 master-2 kubenswrapper[4776]: I1011 10:26:05.742834 4776 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 11 10:26:05.747125 master-2 kubenswrapper[4776]: I1011 10:26:05.744278 4776 server.go:940] "Client rotation is on, will bootstrap in background" Oct 11 10:26:05.749159 master-2 kubenswrapper[4776]: I1011 10:26:05.749106 4776 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Oct 11 10:26:05.752216 master-2 kubenswrapper[4776]: I1011 10:26:05.752167 4776 server.go:997] "Starting client certificate rotation" Oct 11 10:26:05.753027 master-2 kubenswrapper[4776]: I1011 10:26:05.752984 4776 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Oct 11 10:26:05.753301 master-2 kubenswrapper[4776]: I1011 10:26:05.753216 4776 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Oct 11 10:26:05.782297 master-2 kubenswrapper[4776]: I1011 10:26:05.782207 4776 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 11 10:26:05.786190 master-2 kubenswrapper[4776]: I1011 10:26:05.786121 4776 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 11 10:26:05.803281 master-2 kubenswrapper[4776]: I1011 10:26:05.803186 4776 log.go:25] "Validated CRI v1 runtime API" Oct 11 10:26:05.809772 master-2 kubenswrapper[4776]: I1011 10:26:05.809714 4776 log.go:25] "Validated CRI v1 image API" Oct 11 10:26:05.811955 master-2 kubenswrapper[4776]: I1011 10:26:05.811894 4776 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 11 10:26:05.818861 master-2 kubenswrapper[4776]: I1011 10:26:05.818802 4776 fs.go:135] Filesystem UUIDs: map[76af800b-3127-4b99-b103-7e68794afee3:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Oct 11 10:26:05.819004 master-2 kubenswrapper[4776]: I1011 10:26:05.818848 4776 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Oct 11 10:26:05.832019 master-2 kubenswrapper[4776]: I1011 10:26:05.831934 4776 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Oct 11 10:26:05.854919 master-2 kubenswrapper[4776]: I1011 10:26:05.854264 4776 manager.go:217] Machine: {Timestamp:2025-10-11 10:26:05.85088475 +0000 UTC m=+0.635311539 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2799998 MemoryCapacity:50514149376 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} 
{PageSize:2048 NumPages:0}] MachineID:5bc5ca53875847afb260297b16f63643 SystemUUID:5bc5ca53-8758-47af-b260-297b16f63643 BootID:7aa46b32-7bb5-4c5a-8660-726bba203ff5 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257074688 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:91:4f:09 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:3e:91:4f:09 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:e1:55:c5 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:40:77:bf Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:86:fb:14:35:af:42 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514149376 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Oct 11 10:26:05.854919 master-2 kubenswrapper[4776]: I1011 10:26:05.854806 4776 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
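From roughly 10:26:05.878 onward the journal below is dominated by "forbidden" errors for User "system:anonymous" (node lister, lease controller, event posting, node registration). These line up with the bootstrap sequence logged above at 10:26:05.744-05.753 (client rotation on, bootstrap credentials used to request a certificate): until that client certificate request is approved, the kubelet's API calls are effectively unauthenticated and get rejected, so the errors would be expected to clear once the CSR is approved and the node registers. A small sketch, again assuming this journal format, to summarize which verbs and resources are being denied:

import re
from collections import Counter

def denied_requests(journal_text):
    """Tally 'User "system:anonymous" cannot <verb> resource "<res>"' denials."""
    text = journal_text.replace('\\"', '"')  # unescape quotes inside err="..." fields
    pattern = r'User "system:anonymous" cannot (\w+) resource "([^"]+)"'
    return Counter(re.findall(pattern, text))

# Usage (file name is illustrative):
# with open("kubelet.log") as f:
#     for (verb, resource), n in denied_requests(f.read()).most_common():
#         print(f"{n:3d}  {verb:7s}{resource}")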
Oct 11 10:26:05.855214 master-2 kubenswrapper[4776]: I1011 10:26:05.855039 4776 manager.go:233] Version: {KernelVersion:5.14.0-427.91.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202509241235-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Oct 11 10:26:05.855581 master-2 kubenswrapper[4776]: I1011 10:26:05.855536 4776 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 11 10:26:05.856867 master-2 kubenswrapper[4776]: I1011 10:26:05.856787 4776 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 11 10:26:05.857237 master-2 kubenswrapper[4776]: I1011 10:26:05.856871 4776 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-2","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 11 10:26:05.857296 master-2 kubenswrapper[4776]: I1011 10:26:05.857236 4776 topology_manager.go:138] "Creating topology manager with none policy" Oct 11 10:26:05.857296 master-2 kubenswrapper[4776]: I1011 10:26:05.857258 4776 container_manager_linux.go:303] "Creating device plugin manager" Oct 11 10:26:05.857296 master-2 kubenswrapper[4776]: I1011 10:26:05.857285 4776 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Oct 11 10:26:05.857406 master-2 kubenswrapper[4776]: I1011 10:26:05.857314 4776 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Oct 11 10:26:05.858529 master-2 kubenswrapper[4776]: I1011 10:26:05.858493 4776 state_mem.go:36] "Initialized new in-memory state store" Oct 11 10:26:05.858703 master-2 kubenswrapper[4776]: I1011 10:26:05.858646 4776 server.go:1245] "Using root directory" path="/var/lib/kubelet" Oct 11 10:26:05.865043 master-2 kubenswrapper[4776]: I1011 10:26:05.864986 4776 kubelet.go:418] "Attempting to sync node with API server" Oct 11 
10:26:05.865043 master-2 kubenswrapper[4776]: I1011 10:26:05.865023 4776 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 11 10:26:05.865233 master-2 kubenswrapper[4776]: I1011 10:26:05.865092 4776 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Oct 11 10:26:05.865233 master-2 kubenswrapper[4776]: I1011 10:26:05.865118 4776 kubelet.go:324] "Adding apiserver pod source" Oct 11 10:26:05.865233 master-2 kubenswrapper[4776]: I1011 10:26:05.865140 4776 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 11 10:26:05.869864 master-2 kubenswrapper[4776]: I1011 10:26:05.869812 4776 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.12-3.rhaos4.18.gitdc59c78.el9" apiVersion="v1" Oct 11 10:26:05.873531 master-2 kubenswrapper[4776]: I1011 10:26:05.873472 4776 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 11 10:26:05.873749 master-2 kubenswrapper[4776]: I1011 10:26:05.873718 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Oct 11 10:26:05.873749 master-2 kubenswrapper[4776]: I1011 10:26:05.873745 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Oct 11 10:26:05.873911 master-2 kubenswrapper[4776]: I1011 10:26:05.873756 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Oct 11 10:26:05.873911 master-2 kubenswrapper[4776]: I1011 10:26:05.873767 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Oct 11 10:26:05.873911 master-2 kubenswrapper[4776]: I1011 10:26:05.873779 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Oct 11 10:26:05.873911 master-2 kubenswrapper[4776]: I1011 10:26:05.873788 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Oct 11 10:26:05.873911 master-2 kubenswrapper[4776]: I1011 10:26:05.873799 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Oct 11 10:26:05.873911 master-2 kubenswrapper[4776]: I1011 10:26:05.873813 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Oct 11 10:26:05.873911 master-2 kubenswrapper[4776]: I1011 10:26:05.873823 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Oct 11 10:26:05.873911 master-2 kubenswrapper[4776]: I1011 10:26:05.873833 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Oct 11 10:26:05.873911 master-2 kubenswrapper[4776]: I1011 10:26:05.873857 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Oct 11 10:26:05.874403 master-2 kubenswrapper[4776]: I1011 10:26:05.874370 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Oct 11 10:26:05.875174 master-2 kubenswrapper[4776]: I1011 10:26:05.875138 4776 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Oct 11 10:26:05.875716 master-2 kubenswrapper[4776]: I1011 10:26:05.875650 4776 server.go:1280] "Started kubelet" Oct 11 10:26:05.876219 master-2 kubenswrapper[4776]: I1011 10:26:05.876068 4776 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 11 10:26:05.876661 master-2 kubenswrapper[4776]: I1011 10:26:05.876024 4776 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 11 10:26:05.876661 master-2 kubenswrapper[4776]: I1011 10:26:05.876464 4776 server_v1.go:47] "podresources" method="list" 
useActivePods=true Oct 11 10:26:05.877604 master-2 kubenswrapper[4776]: I1011 10:26:05.877533 4776 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 11 10:26:05.878132 master-2 systemd[1]: Started Kubernetes Kubelet. Oct 11 10:26:05.878906 master-2 kubenswrapper[4776]: W1011 10:26:05.878561 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-2" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 11 10:26:05.878906 master-2 kubenswrapper[4776]: I1011 10:26:05.878779 4776 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Oct 11 10:26:05.878906 master-2 kubenswrapper[4776]: W1011 10:26:05.878782 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 11 10:26:05.878906 master-2 kubenswrapper[4776]: E1011 10:26:05.878794 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-2\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:05.878906 master-2 kubenswrapper[4776]: E1011 10:26:05.878849 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:05.878906 master-2 kubenswrapper[4776]: I1011 10:26:05.878820 4776 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 11 10:26:05.879172 master-2 kubenswrapper[4776]: E1011 10:26:05.878995 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:05.879172 master-2 kubenswrapper[4776]: I1011 10:26:05.879044 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:05.879172 master-2 kubenswrapper[4776]: I1011 10:26:05.879098 4776 volume_manager.go:287] "The desired_state_of_world populator starts" Oct 11 10:26:05.879381 master-2 kubenswrapper[4776]: I1011 10:26:05.879217 4776 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Oct 11 10:26:05.879381 master-2 kubenswrapper[4776]: I1011 10:26:05.879227 4776 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 11 10:26:05.879543 master-2 kubenswrapper[4776]: I1011 10:26:05.879385 4776 reconstruct.go:97] "Volume reconstruction finished" Oct 11 10:26:05.879543 master-2 kubenswrapper[4776]: I1011 10:26:05.879402 4776 reconciler.go:26] "Reconciler: start to sync state" Oct 11 10:26:05.881249 master-2 kubenswrapper[4776]: I1011 10:26:05.881108 4776 server.go:449] "Adding debug handlers to kubelet server" Oct 11 10:26:05.884580 master-2 kubenswrapper[4776]: I1011 10:26:05.883909 4776 factory.go:153] Registering CRI-O factory Oct 11 10:26:05.884580 master-2 kubenswrapper[4776]: I1011 10:26:05.883938 4776 factory.go:221] Registration of the crio container factory successfully Oct 11 
10:26:05.884580 master-2 kubenswrapper[4776]: I1011 10:26:05.884059 4776 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Oct 11 10:26:05.884580 master-2 kubenswrapper[4776]: I1011 10:26:05.884071 4776 factory.go:55] Registering systemd factory Oct 11 10:26:05.884580 master-2 kubenswrapper[4776]: I1011 10:26:05.884079 4776 factory.go:221] Registration of the systemd container factory successfully Oct 11 10:26:05.884580 master-2 kubenswrapper[4776]: I1011 10:26:05.884131 4776 factory.go:103] Registering Raw factory Oct 11 10:26:05.884580 master-2 kubenswrapper[4776]: I1011 10:26:05.884148 4776 manager.go:1196] Started watching for new ooms in manager Oct 11 10:26:05.885331 master-2 kubenswrapper[4776]: I1011 10:26:05.884915 4776 manager.go:319] Starting recovery of all containers Oct 11 10:26:05.899286 master-2 kubenswrapper[4776]: W1011 10:26:05.899204 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:05.899462 master-2 kubenswrapper[4776]: E1011 10:26:05.899300 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:05.899620 master-2 kubenswrapper[4776]: E1011 10:26:05.899552 4776 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-2\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Oct 11 10:26:05.902134 master-2 kubenswrapper[4776]: E1011 10:26:05.901942 4776 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Oct 11 10:26:05.914504 master-2 kubenswrapper[4776]: E1011 10:26:05.910951 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5df58100d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.875621901 +0000 UTC m=+0.660048610,LastTimestamp:2025-10-11 10:26:05.875621901 +0000 UTC m=+0.660048610,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:05.920877 master-2 kubenswrapper[4776]: I1011 10:26:05.920840 4776 manager.go:324] Recovery completed Oct 11 10:26:05.932848 master-2 kubenswrapper[4776]: I1011 10:26:05.931906 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:05.934074 master-2 kubenswrapper[4776]: I1011 10:26:05.933968 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:05.934158 master-2 kubenswrapper[4776]: I1011 10:26:05.934083 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:05.934158 master-2 kubenswrapper[4776]: I1011 10:26:05.934101 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:05.935231 master-2 kubenswrapper[4776]: I1011 10:26:05.935179 4776 cpu_manager.go:225] "Starting CPU manager" policy="none" Oct 11 10:26:05.935231 master-2 kubenswrapper[4776]: I1011 10:26:05.935213 4776 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Oct 11 10:26:05.935389 master-2 kubenswrapper[4776]: I1011 10:26:05.935267 4776 state_mem.go:36] "Initialized new in-memory state store" Oct 11 10:26:05.938799 master-2 kubenswrapper[4776]: I1011 10:26:05.938749 4776 policy_none.go:49] "None policy: Start" Oct 11 10:26:05.940021 master-2 kubenswrapper[4776]: I1011 10:26:05.939975 4776 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 11 10:26:05.940021 master-2 kubenswrapper[4776]: I1011 10:26:05.940016 4776 state_mem.go:35] "Initializing new in-memory state store" Oct 11 10:26:05.944785 master-2 kubenswrapper[4776]: E1011 10:26:05.944503 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d3ba3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-2 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934058043 +0000 UTC m=+0.718484762,LastTimestamp:2025-10-11 10:26:05.934058043 +0000 UTC m=+0.718484762,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:05.957502 master-2 kubenswrapper[4776]: E1011 10:26:05.957237 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d44d92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-2 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934095762 +0000 UTC m=+0.718522491,LastTimestamp:2025-10-11 10:26:05.934095762 +0000 UTC m=+0.718522491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:05.967358 master-2 kubenswrapper[4776]: E1011 10:26:05.966899 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d48761 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-2 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934110561 +0000 UTC m=+0.718537280,LastTimestamp:2025-10-11 10:26:05.934110561 +0000 UTC m=+0.718537280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:05.979709 master-2 kubenswrapper[4776]: E1011 10:26:05.979547 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:06.016143 master-2 kubenswrapper[4776]: I1011 10:26:06.016068 4776 manager.go:334] "Starting Device Plugin manager" Oct 11 10:26:06.016312 master-2 kubenswrapper[4776]: I1011 10:26:06.016166 4776 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 11 10:26:06.016312 master-2 kubenswrapper[4776]: I1011 10:26:06.016193 4776 server.go:79] "Starting device plugin registration server" Oct 11 10:26:06.017291 master-2 kubenswrapper[4776]: I1011 10:26:06.017251 4776 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 11 10:26:06.017408 master-2 kubenswrapper[4776]: I1011 10:26:06.017285 4776 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 11 10:26:06.017512 master-2 kubenswrapper[4776]: I1011 10:26:06.017473 4776 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Oct 11 10:26:06.017631 master-2 kubenswrapper[4776]: I1011 10:26:06.017605 4776 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Oct 11 10:26:06.017631 master-2 kubenswrapper[4776]: I1011 10:26:06.017623 4776 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 11 10:26:06.019567 master-2 kubenswrapper[4776]: E1011 10:26:06.019487 4776 eviction_manager.go:285] "Eviction manager: failed to 
get summary stats" err="failed to get node info: node \"master-2\" not found" Oct 11 10:26:06.030930 master-2 kubenswrapper[4776]: E1011 10:26:06.030765 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e7e006ca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:06.018750154 +0000 UTC m=+0.803176903,LastTimestamp:2025-10-11 10:26:06.018750154 +0000 UTC m=+0.803176903,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.055339 master-2 kubenswrapper[4776]: I1011 10:26:06.055237 4776 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 11 10:26:06.057290 master-2 kubenswrapper[4776]: I1011 10:26:06.057216 4776 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 11 10:26:06.057290 master-2 kubenswrapper[4776]: I1011 10:26:06.057268 4776 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 11 10:26:06.057290 master-2 kubenswrapper[4776]: I1011 10:26:06.057289 4776 kubelet.go:2335] "Starting kubelet main sync loop" Oct 11 10:26:06.057608 master-2 kubenswrapper[4776]: E1011 10:26:06.057330 4776 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 11 10:26:06.066435 master-2 kubenswrapper[4776]: W1011 10:26:06.066350 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 11 10:26:06.066435 master-2 kubenswrapper[4776]: E1011 10:26:06.066405 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:06.108912 master-2 kubenswrapper[4776]: E1011 10:26:06.108793 4776 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-2\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Oct 11 10:26:06.117941 master-2 kubenswrapper[4776]: I1011 10:26:06.117881 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:06.119072 master-2 kubenswrapper[4776]: I1011 10:26:06.119021 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:06.119072 master-2 kubenswrapper[4776]: I1011 10:26:06.119066 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:06.119072 master-2 kubenswrapper[4776]: I1011 10:26:06.119078 4776 kubelet_node_status.go:724] "Recording event 
message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:06.119329 master-2 kubenswrapper[4776]: I1011 10:26:06.119109 4776 kubelet_node_status.go:76] "Attempting to register node" node="master-2" Oct 11 10:26:06.129302 master-2 kubenswrapper[4776]: E1011 10:26:06.129133 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d3ba3b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d3ba3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-2 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934058043 +0000 UTC m=+0.718484762,LastTimestamp:2025-10-11 10:26:06.119051552 +0000 UTC m=+0.903478271,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.129488 master-2 kubenswrapper[4776]: E1011 10:26:06.129200 4776 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-2" Oct 11 10:26:06.138534 master-2 kubenswrapper[4776]: E1011 10:26:06.138388 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d44d92\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d44d92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-2 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934095762 +0000 UTC m=+0.718522491,LastTimestamp:2025-10-11 10:26:06.119073711 +0000 UTC m=+0.903500440,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.147714 master-2 kubenswrapper[4776]: E1011 10:26:06.147436 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d48761\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d48761 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-2 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934110561 +0000 UTC m=+0.718537280,LastTimestamp:2025-10-11 10:26:06.119084461 +0000 UTC m=+0.903511180,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.158556 master-2 kubenswrapper[4776]: I1011 10:26:06.158476 4776 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-2"] Oct 11 10:26:06.158667 master-2 kubenswrapper[4776]: I1011 10:26:06.158608 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:06.159824 master-2 kubenswrapper[4776]: I1011 10:26:06.159754 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:06.159921 master-2 kubenswrapper[4776]: I1011 10:26:06.159861 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:06.159921 master-2 kubenswrapper[4776]: I1011 10:26:06.159888 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:06.160634 master-2 kubenswrapper[4776]: I1011 10:26:06.160543 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" Oct 11 10:26:06.160634 master-2 kubenswrapper[4776]: I1011 10:26:06.160623 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:06.162238 master-2 kubenswrapper[4776]: I1011 10:26:06.162161 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:06.162238 master-2 kubenswrapper[4776]: I1011 10:26:06.162203 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:06.162238 master-2 kubenswrapper[4776]: I1011 10:26:06.162220 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:06.167718 master-2 kubenswrapper[4776]: E1011 10:26:06.167468 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d3ba3b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d3ba3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-2 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934058043 +0000 UTC m=+0.718484762,LastTimestamp:2025-10-11 10:26:06.159827412 +0000 UTC m=+0.944254161,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.175249 master-2 kubenswrapper[4776]: E1011 10:26:06.175104 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d44d92\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d44d92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-2 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934095762 +0000 UTC m=+0.718522491,LastTimestamp:2025-10-11 10:26:06.15987773 +0000 UTC 
m=+0.944304489,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.182958 master-2 kubenswrapper[4776]: E1011 10:26:06.182793 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d48761\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d48761 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-2 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934110561 +0000 UTC m=+0.718537280,LastTimestamp:2025-10-11 10:26:06.159902499 +0000 UTC m=+0.944329288,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.191239 master-2 kubenswrapper[4776]: E1011 10:26:06.191039 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d3ba3b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d3ba3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-2 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934058043 +0000 UTC m=+0.718484762,LastTimestamp:2025-10-11 10:26:06.162190545 +0000 UTC m=+0.946617264,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.199813 master-2 kubenswrapper[4776]: E1011 10:26:06.199614 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d44d92\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d44d92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-2 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934095762 +0000 UTC m=+0.718522491,LastTimestamp:2025-10-11 10:26:06.162213824 +0000 UTC m=+0.946640543,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.208070 master-2 kubenswrapper[4776]: E1011 10:26:06.207903 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d48761\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d48761 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-2 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934110561 +0000 UTC m=+0.718537280,LastTimestamp:2025-10-11 10:26:06.162229053 +0000 UTC m=+0.946655772,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.280506 master-2 kubenswrapper[4776]: I1011 10:26:06.280423 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f022eff2d978fee6b366ac18a80aa53c-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-2\" (UID: \"f022eff2d978fee6b366ac18a80aa53c\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" Oct 11 10:26:06.280774 master-2 kubenswrapper[4776]: I1011 10:26:06.280513 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/f022eff2d978fee6b366ac18a80aa53c-etc-kube\") pod \"kube-rbac-proxy-crio-master-2\" (UID: \"f022eff2d978fee6b366ac18a80aa53c\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" Oct 11 10:26:06.330114 master-2 kubenswrapper[4776]: I1011 10:26:06.329995 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:06.331503 master-2 kubenswrapper[4776]: I1011 10:26:06.331444 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:06.331772 master-2 kubenswrapper[4776]: I1011 10:26:06.331535 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:06.331772 master-2 kubenswrapper[4776]: I1011 10:26:06.331559 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:06.331772 master-2 kubenswrapper[4776]: I1011 10:26:06.331626 4776 kubelet_node_status.go:76] "Attempting to register node" node="master-2" Oct 11 10:26:06.340056 master-2 kubenswrapper[4776]: E1011 10:26:06.339930 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d3ba3b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d3ba3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-2 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934058043 +0000 UTC m=+0.718484762,LastTimestamp:2025-10-11 10:26:06.331507909 +0000 UTC m=+1.115934658,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.340191 master-2 kubenswrapper[4776]: E1011 10:26:06.340110 4776 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" 
node="master-2" Oct 11 10:26:06.342469 master-2 kubenswrapper[4776]: E1011 10:26:06.342314 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d44d92\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d44d92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-2 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934095762 +0000 UTC m=+0.718522491,LastTimestamp:2025-10-11 10:26:06.331550439 +0000 UTC m=+1.115977188,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.352385 master-2 kubenswrapper[4776]: E1011 10:26:06.352235 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d48761\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d48761 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-2 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934110561 +0000 UTC m=+0.718537280,LastTimestamp:2025-10-11 10:26:06.331570748 +0000 UTC m=+1.115997497,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.381314 master-2 kubenswrapper[4776]: I1011 10:26:06.381162 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/f022eff2d978fee6b366ac18a80aa53c-etc-kube\") pod \"kube-rbac-proxy-crio-master-2\" (UID: \"f022eff2d978fee6b366ac18a80aa53c\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" Oct 11 10:26:06.381314 master-2 kubenswrapper[4776]: I1011 10:26:06.381315 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f022eff2d978fee6b366ac18a80aa53c-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-2\" (UID: \"f022eff2d978fee6b366ac18a80aa53c\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" Oct 11 10:26:06.381735 master-2 kubenswrapper[4776]: I1011 10:26:06.381399 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f022eff2d978fee6b366ac18a80aa53c-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-2\" (UID: \"f022eff2d978fee6b366ac18a80aa53c\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" Oct 11 10:26:06.381735 master-2 kubenswrapper[4776]: I1011 10:26:06.381506 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/f022eff2d978fee6b366ac18a80aa53c-etc-kube\") pod \"kube-rbac-proxy-crio-master-2\" (UID: \"f022eff2d978fee6b366ac18a80aa53c\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" Oct 11 10:26:06.487613 master-2 kubenswrapper[4776]: I1011 10:26:06.487365 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" Oct 11 10:26:06.521132 master-2 kubenswrapper[4776]: E1011 10:26:06.521010 4776 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-2\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Oct 11 10:26:06.741395 master-2 kubenswrapper[4776]: I1011 10:26:06.741212 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:06.742407 master-2 kubenswrapper[4776]: I1011 10:26:06.742375 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:06.742407 master-2 kubenswrapper[4776]: I1011 10:26:06.742406 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:06.742476 master-2 kubenswrapper[4776]: I1011 10:26:06.742415 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:06.742476 master-2 kubenswrapper[4776]: I1011 10:26:06.742441 4776 kubelet_node_status.go:76] "Attempting to register node" node="master-2" Oct 11 10:26:06.753986 master-2 kubenswrapper[4776]: E1011 10:26:06.753937 4776 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-2" Oct 11 10:26:06.754135 master-2 kubenswrapper[4776]: E1011 10:26:06.753995 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d3ba3b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d3ba3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-2 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934058043 +0000 UTC m=+0.718484762,LastTimestamp:2025-10-11 10:26:06.742396025 +0000 UTC m=+1.526822734,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.762071 master-2 kubenswrapper[4776]: E1011 10:26:06.761949 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d44d92\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d44d92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-2 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934095762 +0000 UTC 
m=+0.718522491,LastTimestamp:2025-10-11 10:26:06.742412231 +0000 UTC m=+1.526838940,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.771920 master-2 kubenswrapper[4776]: E1011 10:26:06.771661 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d48761\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d48761 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-2 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934110561 +0000 UTC m=+0.718537280,LastTimestamp:2025-10-11 10:26:06.742420464 +0000 UTC m=+1.526847173,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:06.818535 master-2 kubenswrapper[4776]: W1011 10:26:06.818428 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:06.818535 master-2 kubenswrapper[4776]: E1011 10:26:06.818514 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:06.888660 master-2 kubenswrapper[4776]: I1011 10:26:06.888561 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:07.016203 master-2 kubenswrapper[4776]: W1011 10:26:07.016038 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-2" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 11 10:26:07.016203 master-2 kubenswrapper[4776]: E1011 10:26:07.016103 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-2\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:07.183761 master-2 kubenswrapper[4776]: W1011 10:26:07.183657 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 11 10:26:07.183937 master-2 kubenswrapper[4776]: E1011 10:26:07.183776 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource 
\"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:07.252126 master-2 kubenswrapper[4776]: W1011 10:26:07.251543 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf022eff2d978fee6b366ac18a80aa53c.slice/crio-e541a5c6b0b8b187d872a1b29da26af99ffdeead9c703d25dc4e829a8cc73947 WatchSource:0}: Error finding container e541a5c6b0b8b187d872a1b29da26af99ffdeead9c703d25dc4e829a8cc73947: Status 404 returned error can't find the container with id e541a5c6b0b8b187d872a1b29da26af99ffdeead9c703d25dc4e829a8cc73947 Oct 11 10:26:07.258775 master-2 kubenswrapper[4776]: I1011 10:26:07.258727 4776 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 10:26:07.271309 master-2 kubenswrapper[4776]: E1011 10:26:07.271010 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e631c68f1b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169\",Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:07.258595099 +0000 UTC m=+2.043021818,LastTimestamp:2025-10-11 10:26:07.258595099 +0000 UTC m=+2.043021818,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:07.310900 master-2 kubenswrapper[4776]: W1011 10:26:07.310819 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 11 10:26:07.310900 master-2 kubenswrapper[4776]: E1011 10:26:07.310894 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:07.330112 master-2 kubenswrapper[4776]: E1011 10:26:07.330038 4776 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-2\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="1.6s" Oct 11 10:26:07.554971 master-2 kubenswrapper[4776]: I1011 10:26:07.554753 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:07.556854 master-2 kubenswrapper[4776]: I1011 10:26:07.556799 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:07.556996 master-2 kubenswrapper[4776]: I1011 10:26:07.556873 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" 
event="NodeHasNoDiskPressure" Oct 11 10:26:07.556996 master-2 kubenswrapper[4776]: I1011 10:26:07.556927 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:07.556996 master-2 kubenswrapper[4776]: I1011 10:26:07.556983 4776 kubelet_node_status.go:76] "Attempting to register node" node="master-2" Oct 11 10:26:07.568460 master-2 kubenswrapper[4776]: E1011 10:26:07.568381 4776 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-2" Oct 11 10:26:07.568574 master-2 kubenswrapper[4776]: E1011 10:26:07.568382 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d3ba3b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d3ba3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-2 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934058043 +0000 UTC m=+0.718484762,LastTimestamp:2025-10-11 10:26:07.556848208 +0000 UTC m=+2.341274957,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:07.578011 master-2 kubenswrapper[4776]: E1011 10:26:07.577893 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d44d92\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d44d92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-2 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934095762 +0000 UTC m=+0.718522491,LastTimestamp:2025-10-11 10:26:07.556885617 +0000 UTC m=+2.341312356,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:07.587622 master-2 kubenswrapper[4776]: E1011 10:26:07.587499 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d48761\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d48761 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-2 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934110561 +0000 UTC m=+0.718537280,LastTimestamp:2025-10-11 10:26:07.556937328 +0000 UTC m=+2.341364067,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 
11 10:26:07.892315 master-2 kubenswrapper[4776]: I1011 10:26:07.892110 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:08.064705 master-2 kubenswrapper[4776]: I1011 10:26:08.064590 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" event={"ID":"f022eff2d978fee6b366ac18a80aa53c","Type":"ContainerStarted","Data":"e541a5c6b0b8b187d872a1b29da26af99ffdeead9c703d25dc4e829a8cc73947"} Oct 11 10:26:08.548755 master-2 kubenswrapper[4776]: W1011 10:26:08.548660 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:08.548755 master-2 kubenswrapper[4776]: E1011 10:26:08.548754 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:08.809342 master-2 kubenswrapper[4776]: W1011 10:26:08.809240 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 11 10:26:08.809342 master-2 kubenswrapper[4776]: E1011 10:26:08.809291 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:08.888590 master-2 kubenswrapper[4776]: I1011 10:26:08.888538 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:08.940156 master-2 kubenswrapper[4776]: E1011 10:26:08.940100 4776 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-2\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="3.2s" Oct 11 10:26:09.044754 master-2 kubenswrapper[4776]: E1011 10:26:09.044520 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e69b93cc13 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169\" in 1.775s (1.775s including waiting). Image size: 458126368 bytes.,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:09.033653267 +0000 UTC m=+3.818079996,LastTimestamp:2025-10-11 10:26:09.033653267 +0000 UTC m=+3.818079996,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:09.169274 master-2 kubenswrapper[4776]: I1011 10:26:09.169162 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:09.170868 master-2 kubenswrapper[4776]: I1011 10:26:09.170807 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:09.170868 master-2 kubenswrapper[4776]: I1011 10:26:09.170868 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:09.171014 master-2 kubenswrapper[4776]: I1011 10:26:09.170889 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:09.171014 master-2 kubenswrapper[4776]: I1011 10:26:09.170932 4776 kubelet_node_status.go:76] "Attempting to register node" node="master-2" Oct 11 10:26:09.183438 master-2 kubenswrapper[4776]: E1011 10:26:09.183379 4776 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-2" Oct 11 10:26:09.183559 master-2 kubenswrapper[4776]: E1011 10:26:09.183381 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d3ba3b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d3ba3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-2 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934058043 +0000 UTC m=+0.718484762,LastTimestamp:2025-10-11 10:26:09.17084912 +0000 UTC m=+3.955275869,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:09.191530 master-2 kubenswrapper[4776]: E1011 10:26:09.191378 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"master-2.186d68e5e2d44d92\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-2.186d68e5e2d44d92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-2,UID:master-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-2 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:05.934095762 +0000 UTC m=+0.718522491,LastTimestamp:2025-10-11 10:26:09.170880181 +0000 UTC m=+3.955306930,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:09.314872 master-2 kubenswrapper[4776]: E1011 10:26:09.314573 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e6aba86340 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:09.303438144 +0000 UTC m=+4.087864853,LastTimestamp:2025-10-11 10:26:09.303438144 +0000 UTC m=+4.087864853,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:09.331751 master-2 kubenswrapper[4776]: E1011 10:26:09.331553 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e6acc32bc5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:09.321970629 +0000 UTC m=+4.106397348,LastTimestamp:2025-10-11 10:26:09.321970629 +0000 UTC m=+4.106397348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:09.527942 master-2 kubenswrapper[4776]: W1011 10:26:09.527652 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 11 10:26:09.527942 master-2 kubenswrapper[4776]: E1011 10:26:09.527851 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:09.775049 master-2 kubenswrapper[4776]: W1011 10:26:09.774904 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-2" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 11 10:26:09.775049 master-2 kubenswrapper[4776]: E1011 10:26:09.775024 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-2\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 11 
10:26:09.892722 master-2 kubenswrapper[4776]: I1011 10:26:09.892460 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:10.072916 master-2 kubenswrapper[4776]: I1011 10:26:10.072770 4776 generic.go:334] "Generic (PLEG): container finished" podID="f022eff2d978fee6b366ac18a80aa53c" containerID="45989505ea87eb5d207184ab8cca1a7ff41c0ae043eba001621e60253585d1e0" exitCode=0 Oct 11 10:26:10.072916 master-2 kubenswrapper[4776]: I1011 10:26:10.072857 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" event={"ID":"f022eff2d978fee6b366ac18a80aa53c","Type":"ContainerDied","Data":"45989505ea87eb5d207184ab8cca1a7ff41c0ae043eba001621e60253585d1e0"} Oct 11 10:26:10.073807 master-2 kubenswrapper[4776]: I1011 10:26:10.072967 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:10.074739 master-2 kubenswrapper[4776]: I1011 10:26:10.074642 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:10.074858 master-2 kubenswrapper[4776]: I1011 10:26:10.074766 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:10.074858 master-2 kubenswrapper[4776]: I1011 10:26:10.074796 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:10.105409 master-2 kubenswrapper[4776]: E1011 10:26:10.105174 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e6dabf9ac6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169\" already present on machine,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:10.093488838 +0000 UTC m=+4.877915577,LastTimestamp:2025-10-11 10:26:10.093488838 +0000 UTC m=+4.877915577,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:10.380528 master-2 kubenswrapper[4776]: E1011 10:26:10.380283 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e6eb293eb8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:10.368847544 +0000 UTC m=+5.153274283,LastTimestamp:2025-10-11 10:26:10.368847544 +0000 UTC m=+5.153274283,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:10.399086 master-2 kubenswrapper[4776]: E1011 10:26:10.398844 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e6ec577694 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:10.388653716 +0000 UTC m=+5.173080455,LastTimestamp:2025-10-11 10:26:10.388653716 +0000 UTC m=+5.173080455,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:10.890429 master-2 kubenswrapper[4776]: I1011 10:26:10.890340 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:11.080314 master-2 kubenswrapper[4776]: I1011 10:26:11.080220 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-2_f022eff2d978fee6b366ac18a80aa53c/kube-rbac-proxy-crio/0.log" Oct 11 10:26:11.081604 master-2 kubenswrapper[4776]: I1011 10:26:11.080791 4776 generic.go:334] "Generic (PLEG): container finished" podID="f022eff2d978fee6b366ac18a80aa53c" containerID="44e8562db268ea9bb2264d46c564a6e3ff14c4925210392cab4c051e86d86afe" exitCode=1 Oct 11 10:26:11.081604 master-2 kubenswrapper[4776]: I1011 10:26:11.080845 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" event={"ID":"f022eff2d978fee6b366ac18a80aa53c","Type":"ContainerDied","Data":"44e8562db268ea9bb2264d46c564a6e3ff14c4925210392cab4c051e86d86afe"} Oct 11 10:26:11.081604 master-2 kubenswrapper[4776]: I1011 10:26:11.080959 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:11.082042 master-2 kubenswrapper[4776]: I1011 10:26:11.081983 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:11.082042 master-2 kubenswrapper[4776]: I1011 10:26:11.082036 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:11.082234 master-2 kubenswrapper[4776]: I1011 
10:26:11.082050 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:11.095243 master-2 kubenswrapper[4776]: I1011 10:26:11.095169 4776 scope.go:117] "RemoveContainer" containerID="44e8562db268ea9bb2264d46c564a6e3ff14c4925210392cab4c051e86d86afe" Oct 11 10:26:11.110142 master-2 kubenswrapper[4776]: E1011 10:26:11.109970 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-2.186d68e6dabf9ac6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e6dabf9ac6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169\" already present on machine,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:10.093488838 +0000 UTC m=+4.877915577,LastTimestamp:2025-10-11 10:26:11.099208943 +0000 UTC m=+5.883635692,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:11.340725 master-2 kubenswrapper[4776]: E1011 10:26:11.340527 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-2.186d68e6eb293eb8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e6eb293eb8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:10.368847544 +0000 UTC m=+5.153274283,LastTimestamp:2025-10-11 10:26:11.329257892 +0000 UTC m=+6.113684611,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:11.353965 master-2 kubenswrapper[4776]: E1011 10:26:11.353793 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-2.186d68e6ec577694\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e6ec577694 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container 
kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:10.388653716 +0000 UTC m=+5.173080455,LastTimestamp:2025-10-11 10:26:11.343758019 +0000 UTC m=+6.128184768,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:11.891031 master-2 kubenswrapper[4776]: I1011 10:26:11.890914 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:11.971171 master-2 kubenswrapper[4776]: W1011 10:26:11.971071 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:11.971360 master-2 kubenswrapper[4776]: E1011 10:26:11.971161 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:12.085418 master-2 kubenswrapper[4776]: I1011 10:26:12.085296 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-2_f022eff2d978fee6b366ac18a80aa53c/kube-rbac-proxy-crio/1.log" Oct 11 10:26:12.086483 master-2 kubenswrapper[4776]: I1011 10:26:12.085941 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-2_f022eff2d978fee6b366ac18a80aa53c/kube-rbac-proxy-crio/0.log" Oct 11 10:26:12.086584 master-2 kubenswrapper[4776]: I1011 10:26:12.086449 4776 generic.go:334] "Generic (PLEG): container finished" podID="f022eff2d978fee6b366ac18a80aa53c" containerID="8d16240c2844fab823fcb447aab9f50d10cf3af9670757f057c38aa4d5dcc97c" exitCode=1 Oct 11 10:26:12.086584 master-2 kubenswrapper[4776]: I1011 10:26:12.086523 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" event={"ID":"f022eff2d978fee6b366ac18a80aa53c","Type":"ContainerDied","Data":"8d16240c2844fab823fcb447aab9f50d10cf3af9670757f057c38aa4d5dcc97c"} Oct 11 10:26:12.086584 master-2 kubenswrapper[4776]: I1011 10:26:12.086548 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:12.086842 master-2 kubenswrapper[4776]: I1011 10:26:12.086628 4776 scope.go:117] "RemoveContainer" containerID="44e8562db268ea9bb2264d46c564a6e3ff14c4925210392cab4c051e86d86afe" Oct 11 10:26:12.088192 master-2 kubenswrapper[4776]: I1011 10:26:12.088139 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:12.088263 master-2 kubenswrapper[4776]: I1011 10:26:12.088196 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:12.088263 master-2 kubenswrapper[4776]: I1011 10:26:12.088220 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:12.101105 master-2 kubenswrapper[4776]: I1011 10:26:12.101052 
4776 scope.go:117] "RemoveContainer" containerID="8d16240c2844fab823fcb447aab9f50d10cf3af9670757f057c38aa4d5dcc97c" Oct 11 10:26:12.101318 master-2 kubenswrapper[4776]: E1011 10:26:12.101271 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-2_openshift-machine-config-operator(f022eff2d978fee6b366ac18a80aa53c)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" podUID="f022eff2d978fee6b366ac18a80aa53c" Oct 11 10:26:12.111900 master-2 kubenswrapper[4776]: E1011 10:26:12.111646 4776 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e7526b1e3c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-2_openshift-machine-config-operator(f022eff2d978fee6b366ac18a80aa53c),Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:12.101217852 +0000 UTC m=+6.885644571,LastTimestamp:2025-10-11 10:26:12.101217852 +0000 UTC m=+6.885644571,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:12.149930 master-2 kubenswrapper[4776]: E1011 10:26:12.149807 4776 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-2\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s" Oct 11 10:26:12.384181 master-2 kubenswrapper[4776]: I1011 10:26:12.384093 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:12.429222 master-2 kubenswrapper[4776]: I1011 10:26:12.429160 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:12.429222 master-2 kubenswrapper[4776]: I1011 10:26:12.429219 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:12.429222 master-2 kubenswrapper[4776]: I1011 10:26:12.429234 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:12.429463 master-2 kubenswrapper[4776]: I1011 10:26:12.429264 4776 kubelet_node_status.go:76] "Attempting to register node" node="master-2" Oct 11 10:26:12.439910 master-2 kubenswrapper[4776]: E1011 10:26:12.439854 4776 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-2" Oct 11 10:26:12.483980 master-2 kubenswrapper[4776]: W1011 10:26:12.483913 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: 
runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 11 10:26:12.483980 master-2 kubenswrapper[4776]: E1011 10:26:12.483964 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:12.809490 master-2 kubenswrapper[4776]: W1011 10:26:12.809338 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 11 10:26:12.809490 master-2 kubenswrapper[4776]: E1011 10:26:12.809394 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:12.888386 master-2 kubenswrapper[4776]: I1011 10:26:12.888296 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:13.090637 master-2 kubenswrapper[4776]: I1011 10:26:13.090490 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-2_f022eff2d978fee6b366ac18a80aa53c/kube-rbac-proxy-crio/1.log" Oct 11 10:26:13.091276 master-2 kubenswrapper[4776]: I1011 10:26:13.091243 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:13.092069 master-2 kubenswrapper[4776]: I1011 10:26:13.092017 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:13.092069 master-2 kubenswrapper[4776]: I1011 10:26:13.092063 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:13.092178 master-2 kubenswrapper[4776]: I1011 10:26:13.092075 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:13.092414 master-2 kubenswrapper[4776]: I1011 10:26:13.092383 4776 scope.go:117] "RemoveContainer" containerID="8d16240c2844fab823fcb447aab9f50d10cf3af9670757f057c38aa4d5dcc97c" Oct 11 10:26:13.093076 master-2 kubenswrapper[4776]: E1011 10:26:13.092532 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-2_openshift-machine-config-operator(f022eff2d978fee6b366ac18a80aa53c)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" podUID="f022eff2d978fee6b366ac18a80aa53c" Oct 11 10:26:13.103304 master-2 kubenswrapper[4776]: E1011 10:26:13.103185 4776 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-2.186d68e7526b1e3c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-2.186d68e7526b1e3c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-2,UID:f022eff2d978fee6b366ac18a80aa53c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-2_openshift-machine-config-operator(f022eff2d978fee6b366ac18a80aa53c),Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:26:12.101217852 +0000 UTC m=+6.885644571,LastTimestamp:2025-10-11 10:26:13.09251174 +0000 UTC m=+7.876938459,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:26:13.891059 master-2 kubenswrapper[4776]: I1011 10:26:13.890985 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:14.250661 master-2 kubenswrapper[4776]: W1011 10:26:14.250598 4776 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-2" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 11 10:26:14.251275 master-2 kubenswrapper[4776]: E1011 10:26:14.250667 4776 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-2\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 11 10:26:14.889483 master-2 kubenswrapper[4776]: I1011 10:26:14.889420 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:15.890479 master-2 kubenswrapper[4776]: I1011 10:26:15.890432 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:16.019803 master-2 kubenswrapper[4776]: E1011 10:26:16.019709 4776 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-2\" not found" Oct 11 10:26:16.886532 master-2 kubenswrapper[4776]: I1011 10:26:16.886491 4776 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-2" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 11 10:26:16.904418 master-2 kubenswrapper[4776]: I1011 10:26:16.904365 4776 csr.go:261] certificate signing request csr-nrqbw is approved, waiting to be issued Oct 11 10:26:16.912857 master-2 kubenswrapper[4776]: I1011 10:26:16.912811 4776 csr.go:257] certificate signing request csr-nrqbw is issued Oct 11 10:26:17.750978 master-2 kubenswrapper[4776]: I1011 10:26:17.750858 4776 transport.go:147] "Certificate rotation detected, shutting 
down client connections to start using new credentials" Oct 11 10:26:17.893187 master-2 kubenswrapper[4776]: I1011 10:26:17.893112 4776 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-2" not found Oct 11 10:26:17.911176 master-2 kubenswrapper[4776]: I1011 10:26:17.911027 4776 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-2" not found Oct 11 10:26:17.915482 master-2 kubenswrapper[4776]: I1011 10:26:17.915400 4776 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2025-10-12 10:21:12 +0000 UTC, rotation deadline is 2025-10-12 03:51:17.11066698 +0000 UTC Oct 11 10:26:17.915482 master-2 kubenswrapper[4776]: I1011 10:26:17.915475 4776 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 17h24m59.195196514s for next certificate rotation Oct 11 10:26:17.971583 master-2 kubenswrapper[4776]: I1011 10:26:17.971511 4776 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-2" not found Oct 11 10:26:18.244117 master-2 kubenswrapper[4776]: I1011 10:26:18.244047 4776 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-2" not found Oct 11 10:26:18.244117 master-2 kubenswrapper[4776]: E1011 10:26:18.244096 4776 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-2" not found Oct 11 10:26:18.268226 master-2 kubenswrapper[4776]: I1011 10:26:18.268167 4776 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-2" not found Oct 11 10:26:18.286038 master-2 kubenswrapper[4776]: I1011 10:26:18.285999 4776 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-2" not found Oct 11 10:26:18.347440 master-2 kubenswrapper[4776]: I1011 10:26:18.347384 4776 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-2" not found Oct 11 10:26:18.558368 master-2 kubenswrapper[4776]: E1011 10:26:18.558174 4776 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-2\" not found" node="master-2" Oct 11 10:26:18.621882 master-2 kubenswrapper[4776]: I1011 10:26:18.621799 4776 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-2" not found Oct 11 10:26:18.621882 master-2 kubenswrapper[4776]: E1011 10:26:18.621841 4776 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-2" not found Oct 11 10:26:18.724778 master-2 kubenswrapper[4776]: I1011 10:26:18.724712 4776 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-2" not found Oct 11 10:26:18.743432 master-2 kubenswrapper[4776]: I1011 10:26:18.743344 4776 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-2" not found Oct 11 10:26:18.803509 master-2 kubenswrapper[4776]: I1011 10:26:18.803426 4776 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-2" not found Oct 11 10:26:18.840821 master-2 kubenswrapper[4776]: I1011 10:26:18.840637 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:18.842991 master-2 kubenswrapper[4776]: I1011 10:26:18.842928 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:18.843162 master-2 kubenswrapper[4776]: I1011 10:26:18.842997 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:18.843162 
master-2 kubenswrapper[4776]: I1011 10:26:18.843022 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:18.843162 master-2 kubenswrapper[4776]: I1011 10:26:18.843080 4776 kubelet_node_status.go:76] "Attempting to register node" node="master-2" Oct 11 10:26:18.854242 master-2 kubenswrapper[4776]: I1011 10:26:18.854164 4776 kubelet_node_status.go:79] "Successfully registered node" node="master-2" Oct 11 10:26:18.854242 master-2 kubenswrapper[4776]: E1011 10:26:18.854211 4776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-2\": node \"master-2\" not found" Oct 11 10:26:18.866763 master-2 kubenswrapper[4776]: E1011 10:26:18.866662 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:18.882257 master-2 kubenswrapper[4776]: I1011 10:26:18.882152 4776 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Oct 11 10:26:18.895069 master-2 kubenswrapper[4776]: I1011 10:26:18.895004 4776 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Oct 11 10:26:18.968176 master-2 kubenswrapper[4776]: E1011 10:26:18.968051 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:19.068537 master-2 kubenswrapper[4776]: E1011 10:26:19.068437 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:19.169692 master-2 kubenswrapper[4776]: E1011 10:26:19.169589 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:19.271305 master-2 kubenswrapper[4776]: E1011 10:26:19.271226 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:19.372306 master-2 kubenswrapper[4776]: E1011 10:26:19.372217 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:19.474095 master-2 kubenswrapper[4776]: E1011 10:26:19.473843 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:19.575281 master-2 kubenswrapper[4776]: E1011 10:26:19.575161 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:19.677191 master-2 kubenswrapper[4776]: E1011 10:26:19.676985 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:19.778442 master-2 kubenswrapper[4776]: E1011 10:26:19.778265 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:19.878818 master-2 kubenswrapper[4776]: E1011 10:26:19.878729 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:19.979559 master-2 kubenswrapper[4776]: E1011 10:26:19.979434 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:20.079826 master-2 kubenswrapper[4776]: E1011 10:26:20.079605 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:20.180385 master-2 kubenswrapper[4776]: E1011 
10:26:20.180250 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:20.281431 master-2 kubenswrapper[4776]: E1011 10:26:20.281271 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:20.382022 master-2 kubenswrapper[4776]: E1011 10:26:20.381794 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:20.482889 master-2 kubenswrapper[4776]: E1011 10:26:20.482788 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:20.583212 master-2 kubenswrapper[4776]: E1011 10:26:20.583108 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:20.597405 master-2 kubenswrapper[4776]: I1011 10:26:20.597246 4776 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Oct 11 10:26:20.684335 master-2 kubenswrapper[4776]: E1011 10:26:20.684229 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:20.784582 master-2 kubenswrapper[4776]: E1011 10:26:20.784517 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:20.885250 master-2 kubenswrapper[4776]: E1011 10:26:20.885192 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:20.985980 master-2 kubenswrapper[4776]: E1011 10:26:20.985784 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:21.086891 master-2 kubenswrapper[4776]: E1011 10:26:21.086814 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:21.187773 master-2 kubenswrapper[4776]: E1011 10:26:21.187712 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:21.288407 master-2 kubenswrapper[4776]: E1011 10:26:21.288178 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:21.388779 master-2 kubenswrapper[4776]: E1011 10:26:21.388658 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:21.488975 master-2 kubenswrapper[4776]: E1011 10:26:21.488865 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:21.590194 master-2 kubenswrapper[4776]: E1011 10:26:21.589983 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:21.690719 master-2 kubenswrapper[4776]: E1011 10:26:21.690556 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:21.791370 master-2 kubenswrapper[4776]: E1011 10:26:21.791116 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:21.892606 master-2 kubenswrapper[4776]: E1011 10:26:21.892370 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:21.993538 master-2 kubenswrapper[4776]: E1011 
10:26:21.993446 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:22.094032 master-2 kubenswrapper[4776]: E1011 10:26:22.093923 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:22.194731 master-2 kubenswrapper[4776]: E1011 10:26:22.194600 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:22.294873 master-2 kubenswrapper[4776]: E1011 10:26:22.294751 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:22.395037 master-2 kubenswrapper[4776]: E1011 10:26:22.394897 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:22.496004 master-2 kubenswrapper[4776]: E1011 10:26:22.495776 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:22.596934 master-2 kubenswrapper[4776]: E1011 10:26:22.596819 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:22.698084 master-2 kubenswrapper[4776]: E1011 10:26:22.697918 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:22.799360 master-2 kubenswrapper[4776]: E1011 10:26:22.799180 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:22.899972 master-2 kubenswrapper[4776]: E1011 10:26:22.899821 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:23.001056 master-2 kubenswrapper[4776]: E1011 10:26:23.000905 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:23.101240 master-2 kubenswrapper[4776]: E1011 10:26:23.101053 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:23.202021 master-2 kubenswrapper[4776]: E1011 10:26:23.201943 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:23.302635 master-2 kubenswrapper[4776]: E1011 10:26:23.302557 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:23.403001 master-2 kubenswrapper[4776]: E1011 10:26:23.402863 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:23.503306 master-2 kubenswrapper[4776]: E1011 10:26:23.503222 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:23.603464 master-2 kubenswrapper[4776]: E1011 10:26:23.603338 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:23.703843 master-2 kubenswrapper[4776]: E1011 10:26:23.703748 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:23.804846 master-2 kubenswrapper[4776]: E1011 10:26:23.804771 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:23.905510 master-2 kubenswrapper[4776]: E1011 
10:26:23.905426 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:24.006298 master-2 kubenswrapper[4776]: E1011 10:26:24.006130 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:24.107105 master-2 kubenswrapper[4776]: E1011 10:26:24.107016 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:24.208236 master-2 kubenswrapper[4776]: E1011 10:26:24.208163 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:24.309139 master-2 kubenswrapper[4776]: E1011 10:26:24.309036 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:24.410158 master-2 kubenswrapper[4776]: E1011 10:26:24.410094 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:24.510626 master-2 kubenswrapper[4776]: E1011 10:26:24.510522 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:24.611748 master-2 kubenswrapper[4776]: E1011 10:26:24.611477 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:24.712341 master-2 kubenswrapper[4776]: E1011 10:26:24.712247 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:24.813306 master-2 kubenswrapper[4776]: E1011 10:26:24.813221 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:24.914452 master-2 kubenswrapper[4776]: E1011 10:26:24.914364 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:25.014660 master-2 kubenswrapper[4776]: E1011 10:26:25.014505 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:25.063976 master-2 kubenswrapper[4776]: I1011 10:26:25.063910 4776 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Oct 11 10:26:25.115823 master-2 kubenswrapper[4776]: E1011 10:26:25.115738 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:25.216544 master-2 kubenswrapper[4776]: E1011 10:26:25.216403 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:25.317565 master-2 kubenswrapper[4776]: E1011 10:26:25.317494 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:25.418143 master-2 kubenswrapper[4776]: E1011 10:26:25.418043 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:25.452599 master-2 kubenswrapper[4776]: I1011 10:26:25.452537 4776 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Oct 11 10:26:25.518570 master-2 kubenswrapper[4776]: E1011 10:26:25.518393 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:25.619449 master-2 kubenswrapper[4776]: E1011 
10:26:25.619327 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:25.720439 master-2 kubenswrapper[4776]: E1011 10:26:25.720317 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:25.821031 master-2 kubenswrapper[4776]: E1011 10:26:25.820854 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:25.921420 master-2 kubenswrapper[4776]: E1011 10:26:25.921312 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:26.020815 master-2 kubenswrapper[4776]: E1011 10:26:26.020728 4776 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-2\" not found" Oct 11 10:26:26.021494 master-2 kubenswrapper[4776]: E1011 10:26:26.021448 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:26.058302 master-2 kubenswrapper[4776]: I1011 10:26:26.058222 4776 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 10:26:26.059278 master-2 kubenswrapper[4776]: I1011 10:26:26.059248 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:26:26.059332 master-2 kubenswrapper[4776]: I1011 10:26:26.059281 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:26:26.059332 master-2 kubenswrapper[4776]: I1011 10:26:26.059290 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:26:26.059584 master-2 kubenswrapper[4776]: I1011 10:26:26.059557 4776 scope.go:117] "RemoveContainer" containerID="8d16240c2844fab823fcb447aab9f50d10cf3af9670757f057c38aa4d5dcc97c" Oct 11 10:26:26.122607 master-2 kubenswrapper[4776]: E1011 10:26:26.122517 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:26.223950 master-2 kubenswrapper[4776]: E1011 10:26:26.223588 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:26.324948 master-2 kubenswrapper[4776]: E1011 10:26:26.324854 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:26.425259 master-2 kubenswrapper[4776]: E1011 10:26:26.425156 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:26.525422 master-2 kubenswrapper[4776]: E1011 10:26:26.525339 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:26.626707 master-2 kubenswrapper[4776]: E1011 10:26:26.626504 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:26.726956 master-2 kubenswrapper[4776]: E1011 10:26:26.726840 4776 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-2\" not found" Oct 11 10:26:26.758856 master-2 kubenswrapper[4776]: I1011 10:26:26.758769 4776 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Oct 11 10:26:26.879799 master-2 
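
The long run of "Error getting the current node from lister" messages above (roughly every 100ms between 10:26:18 and 10:26:26) comes from the kubelet reading its own Node object out of a local informer cache before that cache has been populated: the node was already registered successfully at 10:26:18, and the errors stop right after "Caches populated for *v1.Node". The sketch below shows the same pattern from the client side; it assumes an already-constructed clientset, and lookupNodeFromLister is a hypothetical helper name, not kubelet code.

    package nodesketch

    import (
        "context"
        "fmt"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
    )

    // lookupNodeFromLister starts a Node informer, waits for its cache to sync,
    // and then reads the named node from the lister rather than the API server.
    // Before the cache has synced, a lister Get returns a not-found error much
    // like the repeated kubelet messages above.
    func lookupNodeFromLister(ctx context.Context, client kubernetes.Interface, name string) error {
        factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
        lister := factory.Core().V1().Nodes().Lister()

        factory.Start(ctx.Done())
        factory.WaitForCacheSync(ctx.Done())

        node, err := lister.Get(name)
        if apierrors.IsNotFound(err) {
            return fmt.Errorf("node %q not yet visible in the informer cache", name)
        }
        if err != nil {
            return err
        }
        fmt.Printf("node %s observed, %d labels\n", node.Name, len(node.Labels))
        return nil
    }

Waiting for the cache sync first is what makes the lister read reliable; the kubelet instead keeps retrying on a timer, which is why the same error repeats until the Node informer catches up.
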
kubenswrapper[4776]: I1011 10:26:26.879642 4776 apiserver.go:52] "Watching apiserver" Oct 11 10:26:26.883363 master-2 kubenswrapper[4776]: I1011 10:26:26.883306 4776 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Oct 11 10:26:26.883507 master-2 kubenswrapper[4776]: I1011 10:26:26.883472 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=[] Oct 11 10:26:26.979631 master-2 kubenswrapper[4776]: I1011 10:26:26.979460 4776 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Oct 11 10:26:27.118397 master-2 kubenswrapper[4776]: I1011 10:26:27.118338 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-2_f022eff2d978fee6b366ac18a80aa53c/kube-rbac-proxy-crio/2.log" Oct 11 10:26:27.118997 master-2 kubenswrapper[4776]: I1011 10:26:27.118966 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-2_f022eff2d978fee6b366ac18a80aa53c/kube-rbac-proxy-crio/1.log" Oct 11 10:26:27.119447 master-2 kubenswrapper[4776]: I1011 10:26:27.119411 4776 generic.go:334] "Generic (PLEG): container finished" podID="f022eff2d978fee6b366ac18a80aa53c" containerID="edc999680a89e34d61c0b53a78f91660c99759c03148cd7d4d198e5381947e81" exitCode=1 Oct 11 10:26:27.119506 master-2 kubenswrapper[4776]: I1011 10:26:27.119464 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" event={"ID":"f022eff2d978fee6b366ac18a80aa53c","Type":"ContainerDied","Data":"edc999680a89e34d61c0b53a78f91660c99759c03148cd7d4d198e5381947e81"} Oct 11 10:26:27.119577 master-2 kubenswrapper[4776]: I1011 10:26:27.119548 4776 scope.go:117] "RemoveContainer" containerID="8d16240c2844fab823fcb447aab9f50d10cf3af9670757f057c38aa4d5dcc97c" Oct 11 10:26:27.146538 master-2 kubenswrapper[4776]: I1011 10:26:27.146489 4776 scope.go:117] "RemoveContainer" containerID="edc999680a89e34d61c0b53a78f91660c99759c03148cd7d4d198e5381947e81" Oct 11 10:26:27.146850 master-2 kubenswrapper[4776]: E1011 10:26:27.146806 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-2_openshift-machine-config-operator(f022eff2d978fee6b366ac18a80aa53c)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" podUID="f022eff2d978fee6b366ac18a80aa53c" Oct 11 10:26:27.148820 master-2 kubenswrapper[4776]: I1011 10:26:27.148788 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-2"] Oct 11 10:26:28.125003 master-2 kubenswrapper[4776]: I1011 10:26:28.124894 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-2_f022eff2d978fee6b366ac18a80aa53c/kube-rbac-proxy-crio/2.log" Oct 11 10:26:28.126118 master-2 kubenswrapper[4776]: I1011 10:26:28.125965 4776 scope.go:117] "RemoveContainer" containerID="edc999680a89e34d61c0b53a78f91660c99759c03148cd7d4d198e5381947e81" Oct 11 10:26:28.126263 master-2 kubenswrapper[4776]: E1011 10:26:28.126201 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio 
pod=kube-rbac-proxy-crio-master-2_openshift-machine-config-operator(f022eff2d978fee6b366ac18a80aa53c)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" podUID="f022eff2d978fee6b366ac18a80aa53c" Oct 11 10:26:37.294950 master-2 kubenswrapper[4776]: I1011 10:26:37.294849 4776 csr.go:261] certificate signing request csr-l2g8v is approved, waiting to be issued Oct 11 10:26:37.306558 master-2 kubenswrapper[4776]: I1011 10:26:37.306482 4776 csr.go:257] certificate signing request csr-l2g8v is issued Oct 11 10:26:38.308328 master-2 kubenswrapper[4776]: I1011 10:26:38.308221 4776 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2025-10-12 10:21:12 +0000 UTC, rotation deadline is 2025-10-12 03:47:49.858757141 +0000 UTC Oct 11 10:26:38.309258 master-2 kubenswrapper[4776]: I1011 10:26:38.308897 4776 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h21m11.549871315s for next certificate rotation Oct 11 10:26:39.310154 master-2 kubenswrapper[4776]: I1011 10:26:39.310012 4776 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2025-10-12 10:21:12 +0000 UTC, rotation deadline is 2025-10-12 03:14:41.747694024 +0000 UTC Oct 11 10:26:39.310154 master-2 kubenswrapper[4776]: I1011 10:26:39.310093 4776 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 16h48m2.437607621s for next certificate rotation Oct 11 10:26:43.058969 master-2 kubenswrapper[4776]: I1011 10:26:43.058831 4776 scope.go:117] "RemoveContainer" containerID="edc999680a89e34d61c0b53a78f91660c99759c03148cd7d4d198e5381947e81" Oct 11 10:26:43.059958 master-2 kubenswrapper[4776]: E1011 10:26:43.059205 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-2_openshift-machine-config-operator(f022eff2d978fee6b366ac18a80aa53c)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" podUID="f022eff2d978fee6b366ac18a80aa53c" Oct 11 10:26:46.594624 master-2 kubenswrapper[4776]: I1011 10:26:46.594483 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx"] Oct 11 10:26:46.595573 master-2 kubenswrapper[4776]: I1011 10:26:46.594900 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:26:46.598739 master-2 kubenswrapper[4776]: I1011 10:26:46.598652 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Oct 11 10:26:46.599061 master-2 kubenswrapper[4776]: I1011 10:26:46.598928 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Oct 11 10:26:46.599432 master-2 kubenswrapper[4776]: I1011 10:26:46.599388 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Oct 11 10:26:46.641588 master-2 kubenswrapper[4776]: I1011 10:26:46.641489 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b7b07707-84bd-43a6-a43d-6680decaa210-etc-ssl-certs\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:26:46.641588 master-2 kubenswrapper[4776]: I1011 10:26:46.641554 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b7b07707-84bd-43a6-a43d-6680decaa210-etc-cvo-updatepayloads\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:26:46.641588 master-2 kubenswrapper[4776]: I1011 10:26:46.641586 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:26:46.641982 master-2 kubenswrapper[4776]: I1011 10:26:46.641613 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b7b07707-84bd-43a6-a43d-6680decaa210-service-ca\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:26:46.641982 master-2 kubenswrapper[4776]: I1011 10:26:46.641639 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7b07707-84bd-43a6-a43d-6680decaa210-kube-api-access\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:26:46.742106 master-2 kubenswrapper[4776]: I1011 10:26:46.741925 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b7b07707-84bd-43a6-a43d-6680decaa210-etc-ssl-certs\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:26:46.742106 master-2 kubenswrapper[4776]: I1011 10:26:46.742043 4776 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b7b07707-84bd-43a6-a43d-6680decaa210-etc-cvo-updatepayloads\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:26:46.742106 master-2 kubenswrapper[4776]: I1011 10:26:46.742101 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:26:46.742511 master-2 kubenswrapper[4776]: I1011 10:26:46.742155 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b7b07707-84bd-43a6-a43d-6680decaa210-service-ca\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:26:46.742511 master-2 kubenswrapper[4776]: I1011 10:26:46.742204 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7b07707-84bd-43a6-a43d-6680decaa210-kube-api-access\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:26:46.742511 master-2 kubenswrapper[4776]: I1011 10:26:46.742233 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b7b07707-84bd-43a6-a43d-6680decaa210-etc-cvo-updatepayloads\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:26:46.742511 master-2 kubenswrapper[4776]: E1011 10:26:46.742453 4776 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Oct 11 10:26:46.742905 master-2 kubenswrapper[4776]: E1011 10:26:46.742736 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert podName:b7b07707-84bd-43a6-a43d-6680decaa210 nodeName:}" failed. No retries permitted until 2025-10-11 10:26:47.242591835 +0000 UTC m=+42.027018574 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert") pod "cluster-version-operator-55bd67947c-tpbwx" (UID: "b7b07707-84bd-43a6-a43d-6680decaa210") : secret "cluster-version-operator-serving-cert" not found Oct 11 10:26:46.742905 master-2 kubenswrapper[4776]: I1011 10:26:46.742835 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b7b07707-84bd-43a6-a43d-6680decaa210-etc-ssl-certs\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:26:46.743460 master-2 kubenswrapper[4776]: I1011 10:26:46.743336 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b7b07707-84bd-43a6-a43d-6680decaa210-service-ca\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:26:46.763232 master-2 kubenswrapper[4776]: I1011 10:26:46.763129 4776 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 11 10:26:46.773623 master-2 kubenswrapper[4776]: I1011 10:26:46.773506 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7b07707-84bd-43a6-a43d-6680decaa210-kube-api-access\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:26:47.245725 master-2 kubenswrapper[4776]: I1011 10:26:47.245530 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:26:47.246026 master-2 kubenswrapper[4776]: E1011 10:26:47.245838 4776 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Oct 11 10:26:47.246026 master-2 kubenswrapper[4776]: E1011 10:26:47.245944 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert podName:b7b07707-84bd-43a6-a43d-6680decaa210 nodeName:}" failed. No retries permitted until 2025-10-11 10:26:48.245909058 +0000 UTC m=+43.030335817 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert") pod "cluster-version-operator-55bd67947c-tpbwx" (UID: "b7b07707-84bd-43a6-a43d-6680decaa210") : secret "cluster-version-operator-serving-cert" not found Oct 11 10:26:48.253083 master-2 kubenswrapper[4776]: I1011 10:26:48.253045 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:26:48.253812 master-2 kubenswrapper[4776]: E1011 10:26:48.253152 4776 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Oct 11 10:26:48.253812 master-2 kubenswrapper[4776]: E1011 10:26:48.253201 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert podName:b7b07707-84bd-43a6-a43d-6680decaa210 nodeName:}" failed. No retries permitted until 2025-10-11 10:26:50.253186541 +0000 UTC m=+45.037613250 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert") pod "cluster-version-operator-55bd67947c-tpbwx" (UID: "b7b07707-84bd-43a6-a43d-6680decaa210") : secret "cluster-version-operator-serving-cert" not found Oct 11 10:26:50.268374 master-2 kubenswrapper[4776]: I1011 10:26:50.268215 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:26:50.269474 master-2 kubenswrapper[4776]: E1011 10:26:50.268474 4776 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Oct 11 10:26:50.269474 master-2 kubenswrapper[4776]: E1011 10:26:50.268585 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert podName:b7b07707-84bd-43a6-a43d-6680decaa210 nodeName:}" failed. No retries permitted until 2025-10-11 10:26:54.268546097 +0000 UTC m=+49.052972846 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert") pod "cluster-version-operator-55bd67947c-tpbwx" (UID: "b7b07707-84bd-43a6-a43d-6680decaa210") : secret "cluster-version-operator-serving-cert" not found Oct 11 10:26:54.298533 master-2 kubenswrapper[4776]: I1011 10:26:54.298424 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:26:54.299134 master-2 kubenswrapper[4776]: E1011 10:26:54.298604 4776 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Oct 11 10:26:54.299134 master-2 kubenswrapper[4776]: E1011 10:26:54.298714 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert podName:b7b07707-84bd-43a6-a43d-6680decaa210 nodeName:}" failed. No retries permitted until 2025-10-11 10:27:02.298661363 +0000 UTC m=+57.083088092 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert") pod "cluster-version-operator-55bd67947c-tpbwx" (UID: "b7b07707-84bd-43a6-a43d-6680decaa210") : secret "cluster-version-operator-serving-cert" not found Oct 11 10:26:58.058993 master-2 kubenswrapper[4776]: I1011 10:26:58.058894 4776 scope.go:117] "RemoveContainer" containerID="edc999680a89e34d61c0b53a78f91660c99759c03148cd7d4d198e5381947e81" Oct 11 10:26:59.197340 master-2 kubenswrapper[4776]: I1011 10:26:59.197272 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-2_f022eff2d978fee6b366ac18a80aa53c/kube-rbac-proxy-crio/2.log" Oct 11 10:26:59.198083 master-2 kubenswrapper[4776]: I1011 10:26:59.197883 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" event={"ID":"f022eff2d978fee6b366ac18a80aa53c","Type":"ContainerStarted","Data":"160ea2b844bd95411f1fc839160ddbfb5ba513bcdf40167a2589c9e26bd964ad"} Oct 11 10:27:02.357854 master-2 kubenswrapper[4776]: I1011 10:27:02.357768 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:27:02.358395 master-2 kubenswrapper[4776]: E1011 10:27:02.357936 4776 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Oct 11 10:27:02.358395 master-2 kubenswrapper[4776]: E1011 10:27:02.358037 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert podName:b7b07707-84bd-43a6-a43d-6680decaa210 nodeName:}" failed. No retries permitted until 2025-10-11 10:27:18.358014424 +0000 UTC m=+73.142441143 (durationBeforeRetry 16s). 
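
The retry notices above show the volume manager backing off on the failed serving-cert mount: durationBeforeRetry grows 500ms, 1s, 2s, 4s, 8s, 16s while the secret cluster-version-operator-serving-cert does not yet exist, and the same doubling idea is visible in the earlier CrashLoopBackOff "back-off 20s" messages. The pod stays stuck in volume setup until the secret appears, at which point the next retry succeeds; a check such as kubectl get secret -n openshift-cluster-version cluster-version-operator-serving-cert would confirm when that happens. The sketch below is a simplified doubling backoff in the same spirit, not the kubelet's actual nestedpendingoperations code, and the cap is a placeholder since the real cap is not visible in this excerpt.

    package backoffsketch

    import (
        "context"
        "fmt"
        "time"
    )

    // retryWithBackoff runs op until it succeeds, sleeping between attempts and
    // doubling the delay each time, starting at initial and never exceeding max.
    // With initial=500ms this reproduces the 500ms, 1s, 2s, 4s, 8s, 16s sequence
    // seen in the log above.
    func retryWithBackoff(ctx context.Context, initial, max time.Duration, op func() error) error {
        delay := initial
        for {
            if err := op(); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("giving up: %w", ctx.Err())
            case <-time.After(delay):
            }
            delay *= 2
            if delay > max {
                delay = max // placeholder cap; the kubelet's real cap is not shown here
            }
        }
    }

In this log the op would be the MountVolume.SetUp attempt: every attempt fails with "secret not found" until the secret is created, after which the attempt returns nil and the backoff loop exits.
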
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert") pod "cluster-version-operator-55bd67947c-tpbwx" (UID: "b7b07707-84bd-43a6-a43d-6680decaa210") : secret "cluster-version-operator-serving-cert" not found Oct 11 10:27:13.013720 master-2 kubenswrapper[4776]: I1011 10:27:13.013464 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-2" podStartSLOduration=46.013447558 podStartE2EDuration="46.013447558s" podCreationTimestamp="2025-10-11 10:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:26:59.2134733 +0000 UTC m=+53.997900019" watchObservedRunningTime="2025-10-11 10:27:13.013447558 +0000 UTC m=+67.797874267" Oct 11 10:27:13.013720 master-2 kubenswrapper[4776]: I1011 10:27:13.013585 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-xssj7"] Oct 11 10:27:13.014540 master-2 kubenswrapper[4776]: I1011 10:27:13.013771 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.016955 master-2 kubenswrapper[4776]: I1011 10:27:13.016919 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Oct 11 10:27:13.017294 master-2 kubenswrapper[4776]: I1011 10:27:13.017259 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Oct 11 10:27:13.017485 master-2 kubenswrapper[4776]: I1011 10:27:13.017443 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Oct 11 10:27:13.023424 master-2 kubenswrapper[4776]: I1011 10:27:13.023362 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Oct 11 10:27:13.129423 master-2 kubenswrapper[4776]: I1011 10:27:13.129306 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-conf-dir\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.129423 master-2 kubenswrapper[4776]: I1011 10:27:13.129426 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-daemon-config\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.129715 master-2 kubenswrapper[4776]: I1011 10:27:13.129461 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gchtk\" (UniqueName: \"kubernetes.io/projected/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-kube-api-access-gchtk\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.129715 master-2 kubenswrapper[4776]: I1011 10:27:13.129501 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-hostroot\") pod \"multus-xssj7\" (UID: 
\"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.129715 master-2 kubenswrapper[4776]: I1011 10:27:13.129599 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-run-multus-certs\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.129840 master-2 kubenswrapper[4776]: I1011 10:27:13.129741 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-run-k8s-cni-cncf-io\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.129840 master-2 kubenswrapper[4776]: I1011 10:27:13.129785 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-run-netns\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.129840 master-2 kubenswrapper[4776]: I1011 10:27:13.129818 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-var-lib-cni-bin\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.129953 master-2 kubenswrapper[4776]: I1011 10:27:13.129850 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-var-lib-cni-multus\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.129953 master-2 kubenswrapper[4776]: I1011 10:27:13.129890 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-cni-dir\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.129953 master-2 kubenswrapper[4776]: I1011 10:27:13.129922 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-cni-binary-copy\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.130063 master-2 kubenswrapper[4776]: I1011 10:27:13.129953 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-cnibin\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.130063 master-2 kubenswrapper[4776]: I1011 10:27:13.129984 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-os-release\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.130063 master-2 kubenswrapper[4776]: I1011 10:27:13.130013 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-socket-dir-parent\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.130063 master-2 kubenswrapper[4776]: I1011 10:27:13.130044 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-var-lib-kubelet\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.130232 master-2 kubenswrapper[4776]: I1011 10:27:13.130074 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-etc-kubernetes\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.130232 master-2 kubenswrapper[4776]: I1011 10:27:13.130108 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-system-cni-dir\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.214360 master-2 kubenswrapper[4776]: I1011 10:27:13.214254 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-tmg2p"] Oct 11 10:27:13.214848 master-2 kubenswrapper[4776]: I1011 10:27:13.214802 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.218493 master-2 kubenswrapper[4776]: I1011 10:27:13.218414 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Oct 11 10:27:13.218493 master-2 kubenswrapper[4776]: I1011 10:27:13.218427 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Oct 11 10:27:13.231496 master-2 kubenswrapper[4776]: I1011 10:27:13.231416 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gchtk\" (UniqueName: \"kubernetes.io/projected/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-kube-api-access-gchtk\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.231976 master-2 kubenswrapper[4776]: I1011 10:27:13.231498 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-conf-dir\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.231976 master-2 kubenswrapper[4776]: I1011 10:27:13.231551 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-daemon-config\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.231976 master-2 kubenswrapper[4776]: I1011 10:27:13.231611 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-run-k8s-cni-cncf-io\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.231976 master-2 kubenswrapper[4776]: I1011 10:27:13.231646 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-run-netns\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.231976 master-2 kubenswrapper[4776]: I1011 10:27:13.231745 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-hostroot\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.231976 master-2 kubenswrapper[4776]: I1011 10:27:13.231784 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-run-multus-certs\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.231976 master-2 kubenswrapper[4776]: I1011 10:27:13.231824 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-cni-dir\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.231976 master-2 
kubenswrapper[4776]: I1011 10:27:13.231859 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-cni-binary-copy\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.231976 master-2 kubenswrapper[4776]: I1011 10:27:13.231862 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-run-netns\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.231976 master-2 kubenswrapper[4776]: I1011 10:27:13.231862 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-run-k8s-cni-cncf-io\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.231976 master-2 kubenswrapper[4776]: I1011 10:27:13.231901 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-var-lib-cni-bin\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.231976 master-2 kubenswrapper[4776]: I1011 10:27:13.231965 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-hostroot\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.231992 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-conf-dir\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.232055 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-var-lib-cni-multus\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.231994 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-var-lib-cni-bin\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.232116 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-socket-dir-parent\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.231975 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-run-multus-certs\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.232028 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-cni-dir\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.232171 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-var-lib-kubelet\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.232196 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-var-lib-cni-multus\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.232261 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-etc-kubernetes\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.232273 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-host-var-lib-kubelet\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.232335 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-system-cni-dir\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.232408 master-2 kubenswrapper[4776]: I1011 10:27:13.232379 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-cnibin\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.232733 master-2 kubenswrapper[4776]: I1011 10:27:13.232422 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-os-release\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.232733 master-2 kubenswrapper[4776]: I1011 10:27:13.232433 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-socket-dir-parent\") pod \"multus-xssj7\" (UID: 
\"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.232733 master-2 kubenswrapper[4776]: I1011 10:27:13.232450 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-cnibin\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.232733 master-2 kubenswrapper[4776]: I1011 10:27:13.232466 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-system-cni-dir\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.232733 master-2 kubenswrapper[4776]: I1011 10:27:13.232495 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-etc-kubernetes\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.232733 master-2 kubenswrapper[4776]: I1011 10:27:13.232575 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-os-release\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.233075 master-2 kubenswrapper[4776]: I1011 10:27:13.233010 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-multus-daemon-config\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.233223 master-2 kubenswrapper[4776]: I1011 10:27:13.233178 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-cni-binary-copy\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.253059 master-2 kubenswrapper[4776]: I1011 10:27:13.253004 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gchtk\" (UniqueName: \"kubernetes.io/projected/9e810b8c-5973-4846-b19f-cd8aa3c4ba3e-kube-api-access-gchtk\") pod \"multus-xssj7\" (UID: \"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e\") " pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.333116 master-2 kubenswrapper[4776]: I1011 10:27:13.332962 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-cnibin\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.333116 master-2 kubenswrapper[4776]: I1011 10:27:13.333028 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dftzf\" (UniqueName: \"kubernetes.io/projected/5839b979-8c02-4e0d-9dc1-b1843d8ce872-kube-api-access-dftzf\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 
10:27:13.333116 master-2 kubenswrapper[4776]: I1011 10:27:13.333066 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-os-release\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.333116 master-2 kubenswrapper[4776]: I1011 10:27:13.333096 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/5839b979-8c02-4e0d-9dc1-b1843d8ce872-whereabouts-configmap\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.333494 master-2 kubenswrapper[4776]: I1011 10:27:13.333129 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5839b979-8c02-4e0d-9dc1-b1843d8ce872-cni-binary-copy\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.333494 master-2 kubenswrapper[4776]: I1011 10:27:13.333276 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.333494 master-2 kubenswrapper[4776]: I1011 10:27:13.333392 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5839b979-8c02-4e0d-9dc1-b1843d8ce872-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.333494 master-2 kubenswrapper[4776]: I1011 10:27:13.333446 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-system-cni-dir\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.337247 master-2 kubenswrapper[4776]: I1011 10:27:13.337175 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-xssj7" Oct 11 10:27:13.353472 master-2 kubenswrapper[4776]: W1011 10:27:13.353358 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e810b8c_5973_4846_b19f_cd8aa3c4ba3e.slice/crio-b341838eb4495fbaf603b6875237902c094b327a334b8257673608dda4050f08 WatchSource:0}: Error finding container b341838eb4495fbaf603b6875237902c094b327a334b8257673608dda4050f08: Status 404 returned error can't find the container with id b341838eb4495fbaf603b6875237902c094b327a334b8257673608dda4050f08 Oct 11 10:27:13.434272 master-2 kubenswrapper[4776]: I1011 10:27:13.434071 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5839b979-8c02-4e0d-9dc1-b1843d8ce872-cni-binary-copy\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.434272 master-2 kubenswrapper[4776]: I1011 10:27:13.434202 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-system-cni-dir\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.434272 master-2 kubenswrapper[4776]: I1011 10:27:13.434303 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.435129 master-2 kubenswrapper[4776]: I1011 10:27:13.434351 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5839b979-8c02-4e0d-9dc1-b1843d8ce872-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.435129 master-2 kubenswrapper[4776]: I1011 10:27:13.434379 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-system-cni-dir\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.435129 master-2 kubenswrapper[4776]: I1011 10:27:13.434403 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-cnibin\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.435129 master-2 kubenswrapper[4776]: I1011 10:27:13.434481 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-cnibin\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 
10:27:13.435129 master-2 kubenswrapper[4776]: I1011 10:27:13.434501 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dftzf\" (UniqueName: \"kubernetes.io/projected/5839b979-8c02-4e0d-9dc1-b1843d8ce872-kube-api-access-dftzf\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.435129 master-2 kubenswrapper[4776]: I1011 10:27:13.434551 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-os-release\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.435129 master-2 kubenswrapper[4776]: I1011 10:27:13.434582 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/5839b979-8c02-4e0d-9dc1-b1843d8ce872-whereabouts-configmap\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.435129 master-2 kubenswrapper[4776]: I1011 10:27:13.434759 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-os-release\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.435129 master-2 kubenswrapper[4776]: I1011 10:27:13.434914 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5839b979-8c02-4e0d-9dc1-b1843d8ce872-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.435847 master-2 kubenswrapper[4776]: I1011 10:27:13.435757 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/5839b979-8c02-4e0d-9dc1-b1843d8ce872-whereabouts-configmap\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.435976 master-2 kubenswrapper[4776]: I1011 10:27:13.435910 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5839b979-8c02-4e0d-9dc1-b1843d8ce872-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.436049 master-2 kubenswrapper[4776]: I1011 10:27:13.435910 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5839b979-8c02-4e0d-9dc1-b1843d8ce872-cni-binary-copy\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.466039 master-2 kubenswrapper[4776]: I1011 10:27:13.465941 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dftzf\" 
(UniqueName: \"kubernetes.io/projected/5839b979-8c02-4e0d-9dc1-b1843d8ce872-kube-api-access-dftzf\") pod \"multus-additional-cni-plugins-tmg2p\" (UID: \"5839b979-8c02-4e0d-9dc1-b1843d8ce872\") " pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.529157 master-2 kubenswrapper[4776]: I1011 10:27:13.529051 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-tmg2p" Oct 11 10:27:13.545525 master-2 kubenswrapper[4776]: W1011 10:27:13.545443 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5839b979_8c02_4e0d_9dc1_b1843d8ce872.slice/crio-0e1e52b36d983d17c1cf620828a7b853dc2d0ca6951947ac524751cb0f76991e WatchSource:0}: Error finding container 0e1e52b36d983d17c1cf620828a7b853dc2d0ca6951947ac524751cb0f76991e: Status 404 returned error can't find the container with id 0e1e52b36d983d17c1cf620828a7b853dc2d0ca6951947ac524751cb0f76991e Oct 11 10:27:13.998813 master-2 kubenswrapper[4776]: I1011 10:27:13.998664 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-w52cn"] Oct 11 10:27:13.999400 master-2 kubenswrapper[4776]: I1011 10:27:13.999351 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:13.999542 master-2 kubenswrapper[4776]: E1011 10:27:13.999481 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:14.039937 master-2 kubenswrapper[4776]: I1011 10:27:14.039835 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:14.040837 master-2 kubenswrapper[4776]: I1011 10:27:14.040006 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bn98\" (UniqueName: \"kubernetes.io/projected/35b21a7b-2a5a-4511-a2d5-d950752b4bda-kube-api-access-9bn98\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:14.140839 master-2 kubenswrapper[4776]: I1011 10:27:14.140759 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:14.141014 master-2 kubenswrapper[4776]: I1011 10:27:14.140869 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bn98\" (UniqueName: \"kubernetes.io/projected/35b21a7b-2a5a-4511-a2d5-d950752b4bda-kube-api-access-9bn98\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " 
pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:14.141014 master-2 kubenswrapper[4776]: E1011 10:27:14.140967 4776 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:14.141301 master-2 kubenswrapper[4776]: E1011 10:27:14.141281 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs podName:35b21a7b-2a5a-4511-a2d5-d950752b4bda nodeName:}" failed. No retries permitted until 2025-10-11 10:27:14.641246187 +0000 UTC m=+69.425672936 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs") pod "network-metrics-daemon-w52cn" (UID: "35b21a7b-2a5a-4511-a2d5-d950752b4bda") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:14.171716 master-2 kubenswrapper[4776]: I1011 10:27:14.171624 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bn98\" (UniqueName: \"kubernetes.io/projected/35b21a7b-2a5a-4511-a2d5-d950752b4bda-kube-api-access-9bn98\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:14.230709 master-2 kubenswrapper[4776]: I1011 10:27:14.230557 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tmg2p" event={"ID":"5839b979-8c02-4e0d-9dc1-b1843d8ce872","Type":"ContainerStarted","Data":"0e1e52b36d983d17c1cf620828a7b853dc2d0ca6951947ac524751cb0f76991e"} Oct 11 10:27:14.232225 master-2 kubenswrapper[4776]: I1011 10:27:14.232169 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xssj7" event={"ID":"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e","Type":"ContainerStarted","Data":"b341838eb4495fbaf603b6875237902c094b327a334b8257673608dda4050f08"} Oct 11 10:27:14.645222 master-2 kubenswrapper[4776]: I1011 10:27:14.645130 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:14.645406 master-2 kubenswrapper[4776]: E1011 10:27:14.645336 4776 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:14.645451 master-2 kubenswrapper[4776]: E1011 10:27:14.645420 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs podName:35b21a7b-2a5a-4511-a2d5-d950752b4bda nodeName:}" failed. No retries permitted until 2025-10-11 10:27:15.645400412 +0000 UTC m=+70.429827111 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs") pod "network-metrics-daemon-w52cn" (UID: "35b21a7b-2a5a-4511-a2d5-d950752b4bda") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:15.651753 master-2 kubenswrapper[4776]: I1011 10:27:15.651655 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:15.652257 master-2 kubenswrapper[4776]: E1011 10:27:15.651798 4776 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:15.652257 master-2 kubenswrapper[4776]: E1011 10:27:15.651866 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs podName:35b21a7b-2a5a-4511-a2d5-d950752b4bda nodeName:}" failed. No retries permitted until 2025-10-11 10:27:17.651850268 +0000 UTC m=+72.436276977 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs") pod "network-metrics-daemon-w52cn" (UID: "35b21a7b-2a5a-4511-a2d5-d950752b4bda") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:16.058668 master-2 kubenswrapper[4776]: I1011 10:27:16.058173 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:16.058668 master-2 kubenswrapper[4776]: E1011 10:27:16.058552 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:17.239057 master-2 kubenswrapper[4776]: I1011 10:27:17.238831 4776 generic.go:334] "Generic (PLEG): container finished" podID="5839b979-8c02-4e0d-9dc1-b1843d8ce872" containerID="9262160bc177411fc7cf6da6d14f6188e43faa873f8e3c2271486fbddfecfb2d" exitCode=0 Oct 11 10:27:17.239057 master-2 kubenswrapper[4776]: I1011 10:27:17.238924 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tmg2p" event={"ID":"5839b979-8c02-4e0d-9dc1-b1843d8ce872","Type":"ContainerDied","Data":"9262160bc177411fc7cf6da6d14f6188e43faa873f8e3c2271486fbddfecfb2d"} Oct 11 10:27:17.664659 master-2 kubenswrapper[4776]: I1011 10:27:17.664556 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:17.665071 master-2 kubenswrapper[4776]: E1011 10:27:17.664746 4776 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:17.665071 master-2 kubenswrapper[4776]: E1011 10:27:17.664833 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs podName:35b21a7b-2a5a-4511-a2d5-d950752b4bda nodeName:}" failed. No retries permitted until 2025-10-11 10:27:21.664815902 +0000 UTC m=+76.449242611 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs") pod "network-metrics-daemon-w52cn" (UID: "35b21a7b-2a5a-4511-a2d5-d950752b4bda") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:18.058645 master-2 kubenswrapper[4776]: I1011 10:27:18.058240 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:18.058645 master-2 kubenswrapper[4776]: E1011 10:27:18.058361 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:18.369604 master-2 kubenswrapper[4776]: I1011 10:27:18.369447 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:27:18.370154 master-2 kubenswrapper[4776]: E1011 10:27:18.369648 4776 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Oct 11 10:27:18.370154 master-2 kubenswrapper[4776]: E1011 10:27:18.369742 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert podName:b7b07707-84bd-43a6-a43d-6680decaa210 nodeName:}" failed. No retries permitted until 2025-10-11 10:27:50.36972206 +0000 UTC m=+105.154148769 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert") pod "cluster-version-operator-55bd67947c-tpbwx" (UID: "b7b07707-84bd-43a6-a43d-6680decaa210") : secret "cluster-version-operator-serving-cert" not found Oct 11 10:27:20.057937 master-2 kubenswrapper[4776]: I1011 10:27:20.057897 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:20.058500 master-2 kubenswrapper[4776]: E1011 10:27:20.058030 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:21.693942 master-2 kubenswrapper[4776]: I1011 10:27:21.693873 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:21.694501 master-2 kubenswrapper[4776]: E1011 10:27:21.694005 4776 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:21.694501 master-2 kubenswrapper[4776]: E1011 10:27:21.694056 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs podName:35b21a7b-2a5a-4511-a2d5-d950752b4bda nodeName:}" failed. No retries permitted until 2025-10-11 10:27:29.694043026 +0000 UTC m=+84.478469725 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs") pod "network-metrics-daemon-w52cn" (UID: "35b21a7b-2a5a-4511-a2d5-d950752b4bda") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:22.058396 master-2 kubenswrapper[4776]: I1011 10:27:22.058268 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:22.058549 master-2 kubenswrapper[4776]: E1011 10:27:22.058423 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:24.058763 master-2 kubenswrapper[4776]: I1011 10:27:24.058024 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:24.058763 master-2 kubenswrapper[4776]: E1011 10:27:24.058159 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:24.254596 master-2 kubenswrapper[4776]: I1011 10:27:24.254510 4776 generic.go:334] "Generic (PLEG): container finished" podID="5839b979-8c02-4e0d-9dc1-b1843d8ce872" containerID="41121d6fe516e7df58567d18539545a3bcb2156ba2155868d301fff06925c843" exitCode=0 Oct 11 10:27:24.254596 master-2 kubenswrapper[4776]: I1011 10:27:24.254573 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tmg2p" event={"ID":"5839b979-8c02-4e0d-9dc1-b1843d8ce872","Type":"ContainerDied","Data":"41121d6fe516e7df58567d18539545a3bcb2156ba2155868d301fff06925c843"} Oct 11 10:27:24.256581 master-2 kubenswrapper[4776]: I1011 10:27:24.256531 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xssj7" event={"ID":"9e810b8c-5973-4846-b19f-cd8aa3c4ba3e","Type":"ContainerStarted","Data":"0413888b074dd9127214d0d2728c150c3bc3de7dddcf161739d4e47972fedb12"} Oct 11 10:27:24.310334 master-2 kubenswrapper[4776]: I1011 10:27:24.310126 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-xssj7" podStartSLOduration=0.879619747 podStartE2EDuration="11.310098816s" podCreationTimestamp="2025-10-11 10:27:13 +0000 UTC" firstStartedPulling="2025-10-11 10:27:13.356219236 +0000 UTC m=+68.140645985" lastFinishedPulling="2025-10-11 10:27:23.786698345 +0000 UTC m=+78.571125054" observedRunningTime="2025-10-11 10:27:24.309941051 +0000 UTC m=+79.094367770" watchObservedRunningTime="2025-10-11 10:27:24.310098816 +0000 UTC m=+79.094525565" Oct 11 10:27:25.397810 master-2 kubenswrapper[4776]: I1011 10:27:25.397601 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k"] Oct 11 10:27:25.398562 master-2 kubenswrapper[4776]: I1011 10:27:25.397865 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.400482 master-2 kubenswrapper[4776]: I1011 10:27:25.400424 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 11 10:27:25.400482 master-2 kubenswrapper[4776]: I1011 10:27:25.400461 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 11 10:27:25.401728 master-2 kubenswrapper[4776]: I1011 10:27:25.401605 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 11 10:27:25.401847 master-2 kubenswrapper[4776]: I1011 10:27:25.401764 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 11 10:27:25.401847 master-2 kubenswrapper[4776]: I1011 10:27:25.401832 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 11 10:27:25.521508 master-2 kubenswrapper[4776]: I1011 10:27:25.521433 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-env-overrides\") pod \"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.521508 master-2 kubenswrapper[4776]: I1011 10:27:25.521483 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njmgm\" (UniqueName: \"kubernetes.io/projected/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-kube-api-access-njmgm\") pod \"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.521508 master-2 kubenswrapper[4776]: I1011 10:27:25.521520 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-ovnkube-config\") pod \"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.521800 master-2 kubenswrapper[4776]: I1011 10:27:25.521597 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.599843 master-2 kubenswrapper[4776]: I1011 10:27:25.599739 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-p8m82"] Oct 11 10:27:25.600297 master-2 kubenswrapper[4776]: I1011 10:27:25.600262 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.602737 master-2 kubenswrapper[4776]: I1011 10:27:25.602692 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 11 10:27:25.603865 master-2 kubenswrapper[4776]: I1011 10:27:25.603821 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 11 10:27:25.622309 master-2 kubenswrapper[4776]: I1011 10:27:25.622255 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-ovnkube-config\") pod \"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.622504 master-2 kubenswrapper[4776]: I1011 10:27:25.622328 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.622504 master-2 kubenswrapper[4776]: I1011 10:27:25.622355 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-env-overrides\") pod \"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.622504 master-2 kubenswrapper[4776]: I1011 10:27:25.622378 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njmgm\" (UniqueName: \"kubernetes.io/projected/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-kube-api-access-njmgm\") pod \"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.623075 master-2 kubenswrapper[4776]: I1011 10:27:25.623050 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-ovnkube-config\") pod \"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.623265 master-2 kubenswrapper[4776]: I1011 10:27:25.623243 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-env-overrides\") pod \"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.625568 master-2 kubenswrapper[4776]: I1011 10:27:25.625537 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.637528 master-2 kubenswrapper[4776]: I1011 10:27:25.637489 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njmgm\" (UniqueName: \"kubernetes.io/projected/9727aec8-dcb9-40a6-9d8d-2a61f37b6503-kube-api-access-njmgm\") pod \"ovnkube-control-plane-864d695c77-b8x7k\" (UID: \"9727aec8-dcb9-40a6-9d8d-2a61f37b6503\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.710465 master-2 kubenswrapper[4776]: I1011 10:27:25.710407 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" Oct 11 10:27:25.719113 master-2 kubenswrapper[4776]: W1011 10:27:25.719059 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9727aec8_dcb9_40a6_9d8d_2a61f37b6503.slice/crio-03af94422a1be0c2140e577d539730a3a9c5a868e9a5b030d28510ae80c257c7 WatchSource:0}: Error finding container 03af94422a1be0c2140e577d539730a3a9c5a868e9a5b030d28510ae80c257c7: Status 404 returned error can't find the container with id 03af94422a1be0c2140e577d539730a3a9c5a868e9a5b030d28510ae80c257c7 Oct 11 10:27:25.723088 master-2 kubenswrapper[4776]: I1011 10:27:25.723067 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-var-lib-openvswitch\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723168 master-2 kubenswrapper[4776]: I1011 10:27:25.723094 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-netd\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723168 master-2 kubenswrapper[4776]: I1011 10:27:25.723113 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-env-overrides\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723168 master-2 kubenswrapper[4776]: I1011 10:27:25.723127 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-netns\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723168 master-2 kubenswrapper[4776]: I1011 10:27:25.723141 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-openvswitch\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723168 master-2 kubenswrapper[4776]: I1011 10:27:25.723166 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c908109b-a45d-464d-9ea0-f0823d2cc341-ovn-node-metrics-cert\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723298 master-2 kubenswrapper[4776]: I1011 10:27:25.723180 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-ovn\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723298 master-2 kubenswrapper[4776]: I1011 10:27:25.723194 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-kubelet\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723298 master-2 kubenswrapper[4776]: I1011 10:27:25.723208 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-etc-openvswitch\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723298 master-2 kubenswrapper[4776]: I1011 10:27:25.723221 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-systemd-units\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723298 master-2 kubenswrapper[4776]: I1011 10:27:25.723235 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-systemd\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723298 master-2 kubenswrapper[4776]: I1011 10:27:25.723248 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-bin\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723298 master-2 kubenswrapper[4776]: I1011 10:27:25.723261 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-config\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723298 master-2 kubenswrapper[4776]: I1011 10:27:25.723275 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-node-log\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723298 
master-2 kubenswrapper[4776]: I1011 10:27:25.723290 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-ovn-kubernetes\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723621 master-2 kubenswrapper[4776]: I1011 10:27:25.723314 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-script-lib\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723621 master-2 kubenswrapper[4776]: I1011 10:27:25.723329 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-log-socket\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723621 master-2 kubenswrapper[4776]: I1011 10:27:25.723347 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-slash\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723621 master-2 kubenswrapper[4776]: I1011 10:27:25.723360 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.723621 master-2 kubenswrapper[4776]: I1011 10:27:25.723376 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxlsk\" (UniqueName: \"kubernetes.io/projected/c908109b-a45d-464d-9ea0-f0823d2cc341-kube-api-access-dxlsk\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823716 master-2 kubenswrapper[4776]: I1011 10:27:25.823659 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c908109b-a45d-464d-9ea0-f0823d2cc341-ovn-node-metrics-cert\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823803 master-2 kubenswrapper[4776]: I1011 10:27:25.823722 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-netns\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823803 master-2 kubenswrapper[4776]: I1011 10:27:25.823747 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-openvswitch\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823803 master-2 kubenswrapper[4776]: I1011 10:27:25.823769 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-ovn\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823803 master-2 kubenswrapper[4776]: I1011 10:27:25.823788 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-kubelet\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823800 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-netns\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823822 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-openvswitch\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823834 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-etc-openvswitch\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823807 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-etc-openvswitch\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823848 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-ovn\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823861 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-systemd-units\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823865 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-kubelet\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823881 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-systemd\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823883 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-systemd-units\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823895 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-bin\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823911 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-config\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823905 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-systemd\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.823924 master-2 kubenswrapper[4776]: I1011 10:27:25.823924 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-node-log\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824262 master-2 kubenswrapper[4776]: I1011 10:27:25.823941 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-ovn-kubernetes\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824262 master-2 kubenswrapper[4776]: I1011 10:27:25.823967 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-script-lib\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824262 master-2 kubenswrapper[4776]: I1011 10:27:25.823983 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-log-socket\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824262 master-2 kubenswrapper[4776]: I1011 10:27:25.823997 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxlsk\" (UniqueName: \"kubernetes.io/projected/c908109b-a45d-464d-9ea0-f0823d2cc341-kube-api-access-dxlsk\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824262 master-2 kubenswrapper[4776]: I1011 10:27:25.823998 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-ovn-kubernetes\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824262 master-2 kubenswrapper[4776]: I1011 10:27:25.824024 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-slash\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824262 master-2 kubenswrapper[4776]: I1011 10:27:25.824031 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-node-log\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824262 master-2 kubenswrapper[4776]: I1011 10:27:25.824038 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824262 master-2 kubenswrapper[4776]: I1011 10:27:25.824056 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-var-lib-openvswitch\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824262 master-2 kubenswrapper[4776]: I1011 10:27:25.824069 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-netd\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824262 master-2 kubenswrapper[4776]: I1011 10:27:25.824083 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-env-overrides\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824609 master-2 kubenswrapper[4776]: I1011 10:27:25.824505 4776 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-config\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824609 master-2 kubenswrapper[4776]: I1011 10:27:25.824512 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-env-overrides\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824609 master-2 kubenswrapper[4776]: I1011 10:27:25.824540 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-var-lib-openvswitch\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824609 master-2 kubenswrapper[4776]: I1011 10:27:25.824549 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-slash\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824609 master-2 kubenswrapper[4776]: I1011 10:27:25.824056 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-log-socket\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824609 master-2 kubenswrapper[4776]: I1011 10:27:25.824567 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824609 master-2 kubenswrapper[4776]: I1011 10:27:25.823940 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-bin\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824609 master-2 kubenswrapper[4776]: I1011 10:27:25.824516 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-script-lib\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.824609 master-2 kubenswrapper[4776]: I1011 10:27:25.824581 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-netd\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.827526 master-2 kubenswrapper[4776]: I1011 10:27:25.827486 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c908109b-a45d-464d-9ea0-f0823d2cc341-ovn-node-metrics-cert\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.842268 master-2 kubenswrapper[4776]: I1011 10:27:25.842203 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxlsk\" (UniqueName: \"kubernetes.io/projected/c908109b-a45d-464d-9ea0-f0823d2cc341-kube-api-access-dxlsk\") pod \"ovnkube-node-p8m82\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.912844 master-2 kubenswrapper[4776]: I1011 10:27:25.912774 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:25.922030 master-2 kubenswrapper[4776]: W1011 10:27:25.921974 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc908109b_a45d_464d_9ea0_f0823d2cc341.slice/crio-8f67a229933856e98df91cbb3597a852767fb99fd9bf0ca790d3dd81716f751d WatchSource:0}: Error finding container 8f67a229933856e98df91cbb3597a852767fb99fd9bf0ca790d3dd81716f751d: Status 404 returned error can't find the container with id 8f67a229933856e98df91cbb3597a852767fb99fd9bf0ca790d3dd81716f751d Oct 11 10:27:26.058106 master-2 kubenswrapper[4776]: I1011 10:27:26.057987 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:26.058726 master-2 kubenswrapper[4776]: E1011 10:27:26.058640 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:26.264654 master-2 kubenswrapper[4776]: I1011 10:27:26.264565 4776 generic.go:334] "Generic (PLEG): container finished" podID="5839b979-8c02-4e0d-9dc1-b1843d8ce872" containerID="1ec550f2e5d0a274b3db5f617c5df7975cae753de1a01006d83a446ac870ae10" exitCode=0 Oct 11 10:27:26.264654 master-2 kubenswrapper[4776]: I1011 10:27:26.264631 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tmg2p" event={"ID":"5839b979-8c02-4e0d-9dc1-b1843d8ce872","Type":"ContainerDied","Data":"1ec550f2e5d0a274b3db5f617c5df7975cae753de1a01006d83a446ac870ae10"} Oct 11 10:27:26.266520 master-2 kubenswrapper[4776]: I1011 10:27:26.265559 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerStarted","Data":"8f67a229933856e98df91cbb3597a852767fb99fd9bf0ca790d3dd81716f751d"} Oct 11 10:27:26.267395 master-2 kubenswrapper[4776]: I1011 10:27:26.267329 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" event={"ID":"9727aec8-dcb9-40a6-9d8d-2a61f37b6503","Type":"ContainerStarted","Data":"dc031f4dd9db1fa90da21ee773e117ca3278e0a2094f12e77f4c3fd673ee09ad"} Oct 11 10:27:26.267395 master-2 kubenswrapper[4776]: I1011 10:27:26.267389 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" event={"ID":"9727aec8-dcb9-40a6-9d8d-2a61f37b6503","Type":"ContainerStarted","Data":"03af94422a1be0c2140e577d539730a3a9c5a868e9a5b030d28510ae80c257c7"} Oct 11 10:27:28.058385 master-2 kubenswrapper[4776]: I1011 10:27:28.058341 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:28.058948 master-2 kubenswrapper[4776]: E1011 10:27:28.058480 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:28.276112 master-2 kubenswrapper[4776]: I1011 10:27:28.276055 4776 generic.go:334] "Generic (PLEG): container finished" podID="5839b979-8c02-4e0d-9dc1-b1843d8ce872" containerID="2c03c56b8e58cc1b664bb13193f85f9add5b62239f04b27072c5b46cf41377b7" exitCode=0 Oct 11 10:27:28.276112 master-2 kubenswrapper[4776]: I1011 10:27:28.276097 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tmg2p" event={"ID":"5839b979-8c02-4e0d-9dc1-b1843d8ce872","Type":"ContainerDied","Data":"2c03c56b8e58cc1b664bb13193f85f9add5b62239f04b27072c5b46cf41377b7"} Oct 11 10:27:28.593630 master-2 kubenswrapper[4776]: I1011 10:27:28.593590 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-jdkgd"] Oct 11 10:27:28.593931 master-2 kubenswrapper[4776]: I1011 10:27:28.593910 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:28.594009 master-2 kubenswrapper[4776]: E1011 10:27:28.593976 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:28.661866 master-2 kubenswrapper[4776]: I1011 10:27:28.661799 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plqrv\" (UniqueName: \"kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv\") pod \"network-check-target-jdkgd\" (UID: \"f6543c6f-6f31-431e-9327-60c8cfd70c7e\") " pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:28.763097 master-2 kubenswrapper[4776]: I1011 10:27:28.763042 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plqrv\" (UniqueName: \"kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv\") pod \"network-check-target-jdkgd\" (UID: \"f6543c6f-6f31-431e-9327-60c8cfd70c7e\") " pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:28.788513 master-2 kubenswrapper[4776]: E1011 10:27:28.788444 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 10:27:28.788513 master-2 kubenswrapper[4776]: E1011 10:27:28.788493 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 10:27:28.788513 master-2 kubenswrapper[4776]: E1011 10:27:28.788515 4776 projected.go:194] Error preparing data for projected volume kube-api-access-plqrv for pod openshift-network-diagnostics/network-check-target-jdkgd: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:28.788756 master-2 kubenswrapper[4776]: E1011 10:27:28.788603 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv podName:f6543c6f-6f31-431e-9327-60c8cfd70c7e nodeName:}" failed. No retries permitted until 2025-10-11 10:27:29.288580426 +0000 UTC m=+84.073007165 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-plqrv" (UniqueName: "kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv") pod "network-check-target-jdkgd" (UID: "f6543c6f-6f31-431e-9327-60c8cfd70c7e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:29.366000 master-2 kubenswrapper[4776]: I1011 10:27:29.365890 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plqrv\" (UniqueName: \"kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv\") pod \"network-check-target-jdkgd\" (UID: \"f6543c6f-6f31-431e-9327-60c8cfd70c7e\") " pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:29.367996 master-2 kubenswrapper[4776]: E1011 10:27:29.366253 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 10:27:29.367996 master-2 kubenswrapper[4776]: E1011 10:27:29.366290 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 10:27:29.367996 master-2 kubenswrapper[4776]: E1011 10:27:29.366308 4776 projected.go:194] Error preparing data for projected volume kube-api-access-plqrv for pod openshift-network-diagnostics/network-check-target-jdkgd: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:29.367996 master-2 kubenswrapper[4776]: E1011 10:27:29.366369 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv podName:f6543c6f-6f31-431e-9327-60c8cfd70c7e nodeName:}" failed. No retries permitted until 2025-10-11 10:27:30.366353874 +0000 UTC m=+85.150780583 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-plqrv" (UniqueName: "kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv") pod "network-check-target-jdkgd" (UID: "f6543c6f-6f31-431e-9327-60c8cfd70c7e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:29.769730 master-2 kubenswrapper[4776]: I1011 10:27:29.769648 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:29.769899 master-2 kubenswrapper[4776]: E1011 10:27:29.769784 4776 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:29.769899 master-2 kubenswrapper[4776]: E1011 10:27:29.769848 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs podName:35b21a7b-2a5a-4511-a2d5-d950752b4bda nodeName:}" failed. No retries permitted until 2025-10-11 10:27:45.769827152 +0000 UTC m=+100.554253861 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs") pod "network-metrics-daemon-w52cn" (UID: "35b21a7b-2a5a-4511-a2d5-d950752b4bda") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:30.057642 master-2 kubenswrapper[4776]: I1011 10:27:30.057510 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:30.057642 master-2 kubenswrapper[4776]: I1011 10:27:30.057619 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:30.057959 master-2 kubenswrapper[4776]: E1011 10:27:30.057772 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:30.057959 master-2 kubenswrapper[4776]: E1011 10:27:30.057879 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:30.374821 master-2 kubenswrapper[4776]: I1011 10:27:30.374179 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plqrv\" (UniqueName: \"kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv\") pod \"network-check-target-jdkgd\" (UID: \"f6543c6f-6f31-431e-9327-60c8cfd70c7e\") " pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:30.374821 master-2 kubenswrapper[4776]: E1011 10:27:30.374330 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 10:27:30.374821 master-2 kubenswrapper[4776]: E1011 10:27:30.374349 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 10:27:30.374821 master-2 kubenswrapper[4776]: E1011 10:27:30.374358 4776 projected.go:194] Error preparing data for projected volume kube-api-access-plqrv for pod openshift-network-diagnostics/network-check-target-jdkgd: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:30.374821 master-2 kubenswrapper[4776]: E1011 10:27:30.374405 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv podName:f6543c6f-6f31-431e-9327-60c8cfd70c7e nodeName:}" failed. No retries permitted until 2025-10-11 10:27:32.37439136 +0000 UTC m=+87.158818069 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-plqrv" (UniqueName: "kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv") pod "network-check-target-jdkgd" (UID: "f6543c6f-6f31-431e-9327-60c8cfd70c7e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:31.194376 master-2 kubenswrapper[4776]: I1011 10:27:31.194306 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vx55j"] Oct 11 10:27:31.194719 master-2 kubenswrapper[4776]: I1011 10:27:31.194665 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.197947 master-2 kubenswrapper[4776]: I1011 10:27:31.197891 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Oct 11 10:27:31.198010 master-2 kubenswrapper[4776]: I1011 10:27:31.197891 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Oct 11 10:27:31.198366 master-2 kubenswrapper[4776]: I1011 10:27:31.198326 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Oct 11 10:27:31.198425 master-2 kubenswrapper[4776]: I1011 10:27:31.198367 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Oct 11 10:27:31.198875 master-2 kubenswrapper[4776]: I1011 10:27:31.198838 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Oct 11 10:27:31.283492 master-2 kubenswrapper[4776]: I1011 10:27:31.283444 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skwch\" (UniqueName: \"kubernetes.io/projected/8526c41d-70a8-42de-a10b-6ad2d5266afb-kube-api-access-skwch\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.283711 master-2 kubenswrapper[4776]: I1011 10:27:31.283513 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8526c41d-70a8-42de-a10b-6ad2d5266afb-env-overrides\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.283711 master-2 kubenswrapper[4776]: I1011 10:27:31.283541 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/8526c41d-70a8-42de-a10b-6ad2d5266afb-ovnkube-identity-cm\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.283711 master-2 kubenswrapper[4776]: I1011 10:27:31.283559 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8526c41d-70a8-42de-a10b-6ad2d5266afb-webhook-cert\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " 
pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.383887 master-2 kubenswrapper[4776]: I1011 10:27:31.383826 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8526c41d-70a8-42de-a10b-6ad2d5266afb-env-overrides\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.383887 master-2 kubenswrapper[4776]: I1011 10:27:31.383868 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/8526c41d-70a8-42de-a10b-6ad2d5266afb-ovnkube-identity-cm\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.383887 master-2 kubenswrapper[4776]: I1011 10:27:31.383885 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8526c41d-70a8-42de-a10b-6ad2d5266afb-webhook-cert\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.383887 master-2 kubenswrapper[4776]: I1011 10:27:31.383904 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skwch\" (UniqueName: \"kubernetes.io/projected/8526c41d-70a8-42de-a10b-6ad2d5266afb-kube-api-access-skwch\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.384788 master-2 kubenswrapper[4776]: I1011 10:27:31.384607 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8526c41d-70a8-42de-a10b-6ad2d5266afb-env-overrides\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.385757 master-2 kubenswrapper[4776]: I1011 10:27:31.385666 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/8526c41d-70a8-42de-a10b-6ad2d5266afb-ovnkube-identity-cm\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.390505 master-2 kubenswrapper[4776]: I1011 10:27:31.390446 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8526c41d-70a8-42de-a10b-6ad2d5266afb-webhook-cert\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.413382 master-2 kubenswrapper[4776]: I1011 10:27:31.413291 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skwch\" (UniqueName: \"kubernetes.io/projected/8526c41d-70a8-42de-a10b-6ad2d5266afb-kube-api-access-skwch\") pod \"network-node-identity-vx55j\" (UID: \"8526c41d-70a8-42de-a10b-6ad2d5266afb\") " pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:31.507003 master-2 kubenswrapper[4776]: I1011 10:27:31.506844 4776 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vx55j" Oct 11 10:27:32.058644 master-2 kubenswrapper[4776]: I1011 10:27:32.058363 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:32.058644 master-2 kubenswrapper[4776]: E1011 10:27:32.058533 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:32.058644 master-2 kubenswrapper[4776]: I1011 10:27:32.058553 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:32.058992 master-2 kubenswrapper[4776]: E1011 10:27:32.058756 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:32.284465 master-2 kubenswrapper[4776]: I1011 10:27:32.284410 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vx55j" event={"ID":"8526c41d-70a8-42de-a10b-6ad2d5266afb","Type":"ContainerStarted","Data":"fe91f917532a746fbbdfaf481e8f82717686c7ce90037fe05ff4e042c1b0371d"} Oct 11 10:27:32.391398 master-2 kubenswrapper[4776]: I1011 10:27:32.391274 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plqrv\" (UniqueName: \"kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv\") pod \"network-check-target-jdkgd\" (UID: \"f6543c6f-6f31-431e-9327-60c8cfd70c7e\") " pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:32.391874 master-2 kubenswrapper[4776]: E1011 10:27:32.391438 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 10:27:32.391874 master-2 kubenswrapper[4776]: E1011 10:27:32.391455 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 10:27:32.391874 master-2 kubenswrapper[4776]: E1011 10:27:32.391466 4776 projected.go:194] Error preparing data for projected volume kube-api-access-plqrv for pod openshift-network-diagnostics/network-check-target-jdkgd: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:32.391874 master-2 kubenswrapper[4776]: E1011 10:27:32.391515 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv podName:f6543c6f-6f31-431e-9327-60c8cfd70c7e nodeName:}" failed. No retries permitted until 2025-10-11 10:27:36.391502234 +0000 UTC m=+91.175928943 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-plqrv" (UniqueName: "kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv") pod "network-check-target-jdkgd" (UID: "f6543c6f-6f31-431e-9327-60c8cfd70c7e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:34.060130 master-2 kubenswrapper[4776]: I1011 10:27:34.060089 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:34.060651 master-2 kubenswrapper[4776]: E1011 10:27:34.060188 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:34.060651 master-2 kubenswrapper[4776]: I1011 10:27:34.060546 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:34.060651 master-2 kubenswrapper[4776]: E1011 10:27:34.060610 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:36.058034 master-2 kubenswrapper[4776]: I1011 10:27:36.057988 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:36.058034 master-2 kubenswrapper[4776]: I1011 10:27:36.058032 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:36.058771 master-2 kubenswrapper[4776]: E1011 10:27:36.058597 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:36.058808 master-2 kubenswrapper[4776]: E1011 10:27:36.058765 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:36.420581 master-2 kubenswrapper[4776]: I1011 10:27:36.420522 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plqrv\" (UniqueName: \"kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv\") pod \"network-check-target-jdkgd\" (UID: \"f6543c6f-6f31-431e-9327-60c8cfd70c7e\") " pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:36.420966 master-2 kubenswrapper[4776]: E1011 10:27:36.420899 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 10:27:36.421012 master-2 kubenswrapper[4776]: E1011 10:27:36.420980 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 10:27:36.421045 master-2 kubenswrapper[4776]: E1011 10:27:36.421013 4776 projected.go:194] Error preparing data for projected volume kube-api-access-plqrv for pod openshift-network-diagnostics/network-check-target-jdkgd: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:36.421197 master-2 kubenswrapper[4776]: E1011 10:27:36.421156 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv podName:f6543c6f-6f31-431e-9327-60c8cfd70c7e nodeName:}" failed. No retries permitted until 2025-10-11 10:27:44.421112039 +0000 UTC m=+99.205538898 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-plqrv" (UniqueName: "kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv") pod "network-check-target-jdkgd" (UID: "f6543c6f-6f31-431e-9327-60c8cfd70c7e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:38.060083 master-2 kubenswrapper[4776]: I1011 10:27:38.058226 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:38.060083 master-2 kubenswrapper[4776]: I1011 10:27:38.058282 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:38.060083 master-2 kubenswrapper[4776]: E1011 10:27:38.058342 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:38.060083 master-2 kubenswrapper[4776]: E1011 10:27:38.058408 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:40.058199 master-2 kubenswrapper[4776]: I1011 10:27:40.057757 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:40.058199 master-2 kubenswrapper[4776]: E1011 10:27:40.057872 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:40.058199 master-2 kubenswrapper[4776]: I1011 10:27:40.057765 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:40.058199 master-2 kubenswrapper[4776]: E1011 10:27:40.058081 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:42.057980 master-2 kubenswrapper[4776]: I1011 10:27:42.057912 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:42.058473 master-2 kubenswrapper[4776]: I1011 10:27:42.057912 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:42.058473 master-2 kubenswrapper[4776]: E1011 10:27:42.058115 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:42.058473 master-2 kubenswrapper[4776]: E1011 10:27:42.058270 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:44.058088 master-2 kubenswrapper[4776]: I1011 10:27:44.057658 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:44.058088 master-2 kubenswrapper[4776]: E1011 10:27:44.057787 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:44.058088 master-2 kubenswrapper[4776]: I1011 10:27:44.057658 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:44.058088 master-2 kubenswrapper[4776]: E1011 10:27:44.057950 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:44.313056 master-2 kubenswrapper[4776]: I1011 10:27:44.312945 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tmg2p" event={"ID":"5839b979-8c02-4e0d-9dc1-b1843d8ce872","Type":"ContainerStarted","Data":"978c6f9460e370b0c889c110bf41484945f0806d1b4dcff4d72e01fbd7666b0d"} Oct 11 10:27:44.482208 master-2 kubenswrapper[4776]: I1011 10:27:44.482099 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plqrv\" (UniqueName: \"kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv\") pod \"network-check-target-jdkgd\" (UID: \"f6543c6f-6f31-431e-9327-60c8cfd70c7e\") " pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:44.482398 master-2 kubenswrapper[4776]: E1011 10:27:44.482291 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 10:27:44.482398 master-2 kubenswrapper[4776]: E1011 10:27:44.482321 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 10:27:44.482398 master-2 kubenswrapper[4776]: E1011 10:27:44.482336 4776 projected.go:194] Error preparing data for projected volume kube-api-access-plqrv for pod openshift-network-diagnostics/network-check-target-jdkgd: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:44.482398 master-2 kubenswrapper[4776]: E1011 10:27:44.482394 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv podName:f6543c6f-6f31-431e-9327-60c8cfd70c7e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:00.482378496 +0000 UTC m=+115.266805215 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-plqrv" (UniqueName: "kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv") pod "network-check-target-jdkgd" (UID: "f6543c6f-6f31-431e-9327-60c8cfd70c7e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:27:45.317837 master-2 kubenswrapper[4776]: I1011 10:27:45.317730 4776 generic.go:334] "Generic (PLEG): container finished" podID="5839b979-8c02-4e0d-9dc1-b1843d8ce872" containerID="978c6f9460e370b0c889c110bf41484945f0806d1b4dcff4d72e01fbd7666b0d" exitCode=0 Oct 11 10:27:45.317837 master-2 kubenswrapper[4776]: I1011 10:27:45.317780 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tmg2p" event={"ID":"5839b979-8c02-4e0d-9dc1-b1843d8ce872","Type":"ContainerDied","Data":"978c6f9460e370b0c889c110bf41484945f0806d1b4dcff4d72e01fbd7666b0d"} Oct 11 10:27:45.790579 master-2 kubenswrapper[4776]: I1011 10:27:45.790509 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:45.790846 master-2 kubenswrapper[4776]: E1011 10:27:45.790732 4776 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:45.790846 master-2 kubenswrapper[4776]: E1011 10:27:45.790794 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs podName:35b21a7b-2a5a-4511-a2d5-d950752b4bda nodeName:}" failed. No retries permitted until 2025-10-11 10:28:17.790775633 +0000 UTC m=+132.575202352 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs") pod "network-metrics-daemon-w52cn" (UID: "35b21a7b-2a5a-4511-a2d5-d950752b4bda") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 10:27:46.058215 master-2 kubenswrapper[4776]: I1011 10:27:46.058165 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:46.058363 master-2 kubenswrapper[4776]: I1011 10:27:46.058175 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:46.059032 master-2 kubenswrapper[4776]: E1011 10:27:46.058979 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:46.059190 master-2 kubenswrapper[4776]: E1011 10:27:46.059164 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:46.322915 master-2 kubenswrapper[4776]: I1011 10:27:46.322795 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vx55j" event={"ID":"8526c41d-70a8-42de-a10b-6ad2d5266afb","Type":"ContainerStarted","Data":"266e9dcc343bc1bf1b6ce1eaa05b6e5c2065e68897c04e6301744ef3e5b512b9"} Oct 11 10:27:46.322915 master-2 kubenswrapper[4776]: I1011 10:27:46.322841 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vx55j" event={"ID":"8526c41d-70a8-42de-a10b-6ad2d5266afb","Type":"ContainerStarted","Data":"ae9f21a0d22bdea1c203b25b4cda8a8d2ad2d414f845e4eeb81d1d50f28205a9"} Oct 11 10:27:46.327607 master-2 kubenswrapper[4776]: I1011 10:27:46.327547 4776 generic.go:334] "Generic (PLEG): container finished" podID="5839b979-8c02-4e0d-9dc1-b1843d8ce872" containerID="6cf8c8572314a0e94c28fae95fa2712ce9abd3b4e7ac60073b5b10fdec3a1b47" exitCode=0 Oct 11 10:27:46.327762 master-2 kubenswrapper[4776]: I1011 10:27:46.327713 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tmg2p" event={"ID":"5839b979-8c02-4e0d-9dc1-b1843d8ce872","Type":"ContainerDied","Data":"6cf8c8572314a0e94c28fae95fa2712ce9abd3b4e7ac60073b5b10fdec3a1b47"} Oct 11 10:27:46.329902 master-2 kubenswrapper[4776]: I1011 10:27:46.329848 4776 generic.go:334] "Generic (PLEG): container finished" podID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerID="9f27ab47125b8898a3acd9c0d642a46c72b3fca0b8d884da468e7fad2fb4b4bf" exitCode=0 Oct 11 10:27:46.329970 master-2 kubenswrapper[4776]: I1011 10:27:46.329927 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerDied","Data":"9f27ab47125b8898a3acd9c0d642a46c72b3fca0b8d884da468e7fad2fb4b4bf"} Oct 11 10:27:46.331586 master-2 kubenswrapper[4776]: I1011 10:27:46.331548 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" event={"ID":"9727aec8-dcb9-40a6-9d8d-2a61f37b6503","Type":"ContainerStarted","Data":"e158d51773ea3d54a9f7af87b30a23cf17dc3b04e477c11473ef17096d07a719"} Oct 11 10:27:46.339298 master-2 kubenswrapper[4776]: I1011 10:27:46.339198 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-vx55j" podStartSLOduration=0.972959232 podStartE2EDuration="15.339173244s" podCreationTimestamp="2025-10-11 10:27:31 +0000 UTC" firstStartedPulling="2025-10-11 10:27:31.519036221 +0000 UTC m=+86.303462930" lastFinishedPulling="2025-10-11 10:27:45.885250223 +0000 UTC m=+100.669676942" observedRunningTime="2025-10-11 10:27:46.338479794 +0000 UTC m=+101.122906553" watchObservedRunningTime="2025-10-11 10:27:46.339173244 +0000 UTC m=+101.123599993" Oct 11 10:27:46.353668 master-2 kubenswrapper[4776]: I1011 10:27:46.353566 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k" podStartSLOduration=1.399101989 podStartE2EDuration="21.353539407s" podCreationTimestamp="2025-10-11 10:27:25 +0000 UTC" firstStartedPulling="2025-10-11 10:27:25.853817268 +0000 UTC m=+80.638243977" lastFinishedPulling="2025-10-11 10:27:45.808254686 +0000 UTC m=+100.592681395" 
observedRunningTime="2025-10-11 10:27:46.35329733 +0000 UTC m=+101.137724039" watchObservedRunningTime="2025-10-11 10:27:46.353539407 +0000 UTC m=+101.137966146" Oct 11 10:27:47.340483 master-2 kubenswrapper[4776]: I1011 10:27:47.339828 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tmg2p" event={"ID":"5839b979-8c02-4e0d-9dc1-b1843d8ce872","Type":"ContainerStarted","Data":"228e86713ba0cd8b98073c6780ce78979773f3d20eace16d0494588e5185833d"} Oct 11 10:27:47.344711 master-2 kubenswrapper[4776]: I1011 10:27:47.344605 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerStarted","Data":"8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7"} Oct 11 10:27:47.344861 master-2 kubenswrapper[4776]: I1011 10:27:47.344715 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerStarted","Data":"2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a"} Oct 11 10:27:47.344861 master-2 kubenswrapper[4776]: I1011 10:27:47.344805 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerStarted","Data":"091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88"} Oct 11 10:27:47.344861 master-2 kubenswrapper[4776]: I1011 10:27:47.344832 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerStarted","Data":"f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f"} Oct 11 10:27:47.345133 master-2 kubenswrapper[4776]: I1011 10:27:47.344859 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerStarted","Data":"5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552"} Oct 11 10:27:47.345133 master-2 kubenswrapper[4776]: I1011 10:27:47.344924 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerStarted","Data":"bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81"} Oct 11 10:27:47.360815 master-2 kubenswrapper[4776]: I1011 10:27:47.360700 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-tmg2p" podStartSLOduration=4.608614771 podStartE2EDuration="34.360649678s" podCreationTimestamp="2025-10-11 10:27:13 +0000 UTC" firstStartedPulling="2025-10-11 10:27:13.548307962 +0000 UTC m=+68.332734711" lastFinishedPulling="2025-10-11 10:27:43.300342899 +0000 UTC m=+98.084769618" observedRunningTime="2025-10-11 10:27:47.360139453 +0000 UTC m=+102.144566172" watchObservedRunningTime="2025-10-11 10:27:47.360649678 +0000 UTC m=+102.145076427" Oct 11 10:27:48.058002 master-2 kubenswrapper[4776]: I1011 10:27:48.057932 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:48.058002 master-2 kubenswrapper[4776]: I1011 10:27:48.058011 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:48.058398 master-2 kubenswrapper[4776]: E1011 10:27:48.058119 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:48.058398 master-2 kubenswrapper[4776]: E1011 10:27:48.058334 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:49.354821 master-2 kubenswrapper[4776]: I1011 10:27:49.354626 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerStarted","Data":"082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df"} Oct 11 10:27:50.057745 master-2 kubenswrapper[4776]: I1011 10:27:50.057633 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:50.057745 master-2 kubenswrapper[4776]: I1011 10:27:50.057706 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:50.058087 master-2 kubenswrapper[4776]: E1011 10:27:50.057816 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:50.058087 master-2 kubenswrapper[4776]: E1011 10:27:50.057906 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:50.426853 master-2 kubenswrapper[4776]: I1011 10:27:50.426778 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:27:50.427292 master-2 kubenswrapper[4776]: E1011 10:27:50.426919 4776 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Oct 11 10:27:50.427292 master-2 kubenswrapper[4776]: E1011 10:27:50.427021 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert podName:b7b07707-84bd-43a6-a43d-6680decaa210 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:54.426999384 +0000 UTC m=+169.211426093 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert") pod "cluster-version-operator-55bd67947c-tpbwx" (UID: "b7b07707-84bd-43a6-a43d-6680decaa210") : secret "cluster-version-operator-serving-cert" not found Oct 11 10:27:51.032868 master-2 kubenswrapper[4776]: I1011 10:27:51.032754 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-p8m82"] Oct 11 10:27:51.361631 master-2 kubenswrapper[4776]: I1011 10:27:51.361585 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerStarted","Data":"fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700"} Oct 11 10:27:51.361942 master-2 kubenswrapper[4776]: I1011 10:27:51.361901 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:51.362006 master-2 kubenswrapper[4776]: I1011 10:27:51.361954 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:51.362006 master-2 kubenswrapper[4776]: I1011 10:27:51.361949 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovn-controller" containerID="cri-o://bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81" gracePeriod=30 Oct 11 10:27:51.362006 master-2 kubenswrapper[4776]: I1011 10:27:51.361984 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:51.362006 master-2 kubenswrapper[4776]: I1011 10:27:51.361987 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="kube-rbac-proxy-node" containerID="cri-o://f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f" gracePeriod=30 Oct 11 10:27:51.362163 master-2 kubenswrapper[4776]: I1011 10:27:51.362005 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" 
podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="northd" containerID="cri-o://2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a" gracePeriod=30 Oct 11 10:27:51.362163 master-2 kubenswrapper[4776]: I1011 10:27:51.362086 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovn-acl-logging" containerID="cri-o://5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552" gracePeriod=30 Oct 11 10:27:51.362163 master-2 kubenswrapper[4776]: I1011 10:27:51.362142 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="nbdb" containerID="cri-o://8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7" gracePeriod=30 Oct 11 10:27:51.362275 master-2 kubenswrapper[4776]: I1011 10:27:51.362109 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="sbdb" containerID="cri-o://082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df" gracePeriod=30 Oct 11 10:27:51.362275 master-2 kubenswrapper[4776]: I1011 10:27:51.362106 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88" gracePeriod=30 Oct 11 10:27:51.382859 master-2 kubenswrapper[4776]: I1011 10:27:51.382789 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovnkube-controller" containerID="cri-o://fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700" gracePeriod=30 Oct 11 10:27:52.058503 master-2 kubenswrapper[4776]: I1011 10:27:52.058377 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:52.058503 master-2 kubenswrapper[4776]: I1011 10:27:52.058440 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:52.059160 master-2 kubenswrapper[4776]: E1011 10:27:52.058603 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:52.059160 master-2 kubenswrapper[4776]: E1011 10:27:52.059086 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:52.369984 master-2 kubenswrapper[4776]: I1011 10:27:52.369866 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/kube-rbac-proxy-ovn-metrics/0.log" Oct 11 10:27:52.370653 master-2 kubenswrapper[4776]: I1011 10:27:52.370613 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/kube-rbac-proxy-node/0.log" Oct 11 10:27:52.371333 master-2 kubenswrapper[4776]: I1011 10:27:52.371296 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/ovn-acl-logging/0.log" Oct 11 10:27:52.372081 master-2 kubenswrapper[4776]: I1011 10:27:52.372042 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/ovn-controller/0.log" Oct 11 10:27:52.372664 master-2 kubenswrapper[4776]: I1011 10:27:52.372627 4776 generic.go:334] "Generic (PLEG): container finished" podID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerID="082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df" exitCode=0 Oct 11 10:27:52.372711 master-2 kubenswrapper[4776]: I1011 10:27:52.372666 4776 generic.go:334] "Generic (PLEG): container finished" podID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerID="8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7" exitCode=0 Oct 11 10:27:52.372748 master-2 kubenswrapper[4776]: I1011 10:27:52.372709 4776 generic.go:334] "Generic (PLEG): container finished" podID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerID="2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a" exitCode=0 Oct 11 10:27:52.372748 master-2 kubenswrapper[4776]: I1011 10:27:52.372724 4776 generic.go:334] "Generic (PLEG): container finished" podID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerID="091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88" exitCode=143 Oct 11 10:27:52.372748 master-2 kubenswrapper[4776]: I1011 10:27:52.372727 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerDied","Data":"082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df"} Oct 11 10:27:52.372823 master-2 kubenswrapper[4776]: I1011 10:27:52.372788 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerDied","Data":"8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7"} Oct 11 10:27:52.372823 master-2 kubenswrapper[4776]: I1011 10:27:52.372810 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerDied","Data":"2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a"} Oct 11 10:27:52.372874 master-2 kubenswrapper[4776]: I1011 10:27:52.372828 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerDied","Data":"091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88"} Oct 11 10:27:52.372874 master-2 kubenswrapper[4776]: I1011 
10:27:52.372847 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerDied","Data":"f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f"} Oct 11 10:27:52.372874 master-2 kubenswrapper[4776]: I1011 10:27:52.372737 4776 generic.go:334] "Generic (PLEG): container finished" podID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerID="f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f" exitCode=143 Oct 11 10:27:52.372945 master-2 kubenswrapper[4776]: I1011 10:27:52.372881 4776 generic.go:334] "Generic (PLEG): container finished" podID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerID="5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552" exitCode=143 Oct 11 10:27:52.372945 master-2 kubenswrapper[4776]: I1011 10:27:52.372896 4776 generic.go:334] "Generic (PLEG): container finished" podID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerID="bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81" exitCode=143 Oct 11 10:27:52.372945 master-2 kubenswrapper[4776]: I1011 10:27:52.372929 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerDied","Data":"5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552"} Oct 11 10:27:52.373013 master-2 kubenswrapper[4776]: I1011 10:27:52.372970 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerDied","Data":"bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81"} Oct 11 10:27:52.629982 master-2 kubenswrapper[4776]: I1011 10:27:52.629863 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/ovnkube-controller/0.log" Oct 11 10:27:52.631973 master-2 kubenswrapper[4776]: I1011 10:27:52.631897 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/kube-rbac-proxy-ovn-metrics/0.log" Oct 11 10:27:52.632514 master-2 kubenswrapper[4776]: I1011 10:27:52.632478 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/kube-rbac-proxy-node/0.log" Oct 11 10:27:52.633339 master-2 kubenswrapper[4776]: I1011 10:27:52.633255 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/ovn-acl-logging/0.log" Oct 11 10:27:52.633790 master-2 kubenswrapper[4776]: I1011 10:27:52.633731 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/ovn-controller/0.log" Oct 11 10:27:52.634274 master-2 kubenswrapper[4776]: I1011 10:27:52.634253 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:52.741598 master-2 kubenswrapper[4776]: I1011 10:27:52.741514 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x5wg8"] Oct 11 10:27:52.741598 master-2 kubenswrapper[4776]: E1011 10:27:52.741615 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="nbdb" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741629 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="nbdb" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: E1011 10:27:52.741639 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovnkube-controller" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741648 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovnkube-controller" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: E1011 10:27:52.741657 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="kube-rbac-proxy-node" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741666 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="kube-rbac-proxy-node" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: E1011 10:27:52.741703 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="northd" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741712 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="northd" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: E1011 10:27:52.741721 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="sbdb" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741729 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="sbdb" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: E1011 10:27:52.741737 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovn-controller" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741745 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovn-controller" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: E1011 10:27:52.741753 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="kube-rbac-proxy-ovn-metrics" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741762 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="kube-rbac-proxy-ovn-metrics" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: E1011 10:27:52.741771 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="kubecfg-setup" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741781 4776 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="kubecfg-setup" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: E1011 10:27:52.741790 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovn-acl-logging" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741798 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovn-acl-logging" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741834 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="northd" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741843 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="kube-rbac-proxy-node" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741866 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="kube-rbac-proxy-ovn-metrics" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741874 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="nbdb" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741882 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovn-controller" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741890 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovnkube-controller" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741899 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="sbdb" Oct 11 10:27:52.741897 master-2 kubenswrapper[4776]: I1011 10:27:52.741908 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerName="ovn-acl-logging" Oct 11 10:27:52.743354 master-2 kubenswrapper[4776]: I1011 10:27:52.742442 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.743354 master-2 kubenswrapper[4776]: I1011 10:27:52.742662 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-netns\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743354 master-2 kubenswrapper[4776]: I1011 10:27:52.742871 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:52.743354 master-2 kubenswrapper[4776]: I1011 10:27:52.742893 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-openvswitch\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743354 master-2 kubenswrapper[4776]: I1011 10:27:52.742972 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:52.743354 master-2 kubenswrapper[4776]: I1011 10:27:52.743093 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c908109b-a45d-464d-9ea0-f0823d2cc341-ovn-node-metrics-cert\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743354 master-2 kubenswrapper[4776]: I1011 10:27:52.743224 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743354 master-2 kubenswrapper[4776]: I1011 10:27:52.743266 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:52.743748 master-2 kubenswrapper[4776]: I1011 10:27:52.743464 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-kubelet\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743748 master-2 kubenswrapper[4776]: I1011 10:27:52.743513 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:52.743748 master-2 kubenswrapper[4776]: I1011 10:27:52.743590 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-config\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743748 master-2 kubenswrapper[4776]: I1011 10:27:52.743641 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-script-lib\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743748 master-2 kubenswrapper[4776]: I1011 10:27:52.743698 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-ovn\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743748 master-2 kubenswrapper[4776]: I1011 10:27:52.743731 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-systemd\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743999 master-2 kubenswrapper[4776]: I1011 10:27:52.743758 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-log-socket\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743999 master-2 kubenswrapper[4776]: I1011 10:27:52.743785 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-slash\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743999 master-2 kubenswrapper[4776]: I1011 10:27:52.743868 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-env-overrides\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743999 master-2 kubenswrapper[4776]: I1011 10:27:52.743868 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:52.743999 master-2 kubenswrapper[4776]: I1011 10:27:52.743903 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-var-lib-openvswitch\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743999 master-2 kubenswrapper[4776]: I1011 10:27:52.743940 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-etc-openvswitch\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743999 master-2 kubenswrapper[4776]: I1011 10:27:52.743934 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-log-socket" (OuterVolumeSpecName: "log-socket") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:52.743999 master-2 kubenswrapper[4776]: I1011 10:27:52.743969 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-netd\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743999 master-2 kubenswrapper[4776]: I1011 10:27:52.743997 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-bin\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.743999 master-2 kubenswrapper[4776]: I1011 10:27:52.744008 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744026 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-slash" (OuterVolumeSpecName: "host-slash") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744098 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744045 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxlsk\" (UniqueName: \"kubernetes.io/projected/c908109b-a45d-464d-9ea0-f0823d2cc341-kube-api-access-dxlsk\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744085 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744153 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744138 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-systemd-units\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744110 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744178 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-node-log\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744166 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744213 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-node-log" (OuterVolumeSpecName: "node-log") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744200 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-ovn-kubernetes\") pod \"c908109b-a45d-464d-9ea0-f0823d2cc341\" (UID: \"c908109b-a45d-464d-9ea0-f0823d2cc341\") " Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744378 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744395 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-var-lib-openvswitch\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744390 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744401 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:27:52.744771 master-2 kubenswrapper[4776]: I1011 10:27:52.744422 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-cni-netd\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.744454 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b7bd3364-8f2a-492d-917f-acbbe3267954-ovnkube-config\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.744477 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2g8t\" (UniqueName: \"kubernetes.io/projected/b7bd3364-8f2a-492d-917f-acbbe3267954-kube-api-access-h2g8t\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.744503 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-run-ovn-kubernetes\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.744524 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-slash\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.744542 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-run-netns\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.744565 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b7bd3364-8f2a-492d-917f-acbbe3267954-ovnkube-script-lib\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.744585 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b7bd3364-8f2a-492d-917f-acbbe3267954-ovn-node-metrics-cert\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.744732 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-kubelet\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.744812 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-cni-bin\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.744914 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b7bd3364-8f2a-492d-917f-acbbe3267954-env-overrides\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.745004 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-etc-openvswitch\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.745100 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-run-ovn\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.745196 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-run-systemd\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.745223 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-node-log\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.745247 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.745638 master-2 kubenswrapper[4776]: I1011 10:27:52.745316 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-systemd-units\") pod \"ovnkube-node-x5wg8\" (UID: 
\"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.745347 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-run-openvswitch\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.745714 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-log-socket\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.745779 4776 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-netns\") on node \"master-2\" DevicePath \"\"" Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.745794 4776 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-openvswitch\") on node \"master-2\" DevicePath \"\"" Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.745839 4776 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-2\" DevicePath \"\"" Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.745882 4776 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-kubelet\") on node \"master-2\" DevicePath \"\"" Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.745944 4776 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.745978 4776 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-ovnkube-script-lib\") on node \"master-2\" DevicePath \"\"" Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.746000 4776 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-ovn\") on node \"master-2\" DevicePath \"\"" Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.746017 4776 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-log-socket\") on node \"master-2\" DevicePath \"\"" Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.746034 4776 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-slash\") on node \"master-2\" DevicePath \"\"" Oct 11 10:27:52.746441 master-2 
kubenswrapper[4776]: I1011 10:27:52.746050 4776 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c908109b-a45d-464d-9ea0-f0823d2cc341-env-overrides\") on node \"master-2\" DevicePath \"\"" Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.746067 4776 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-var-lib-openvswitch\") on node \"master-2\" DevicePath \"\"" Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.746085 4776 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-etc-openvswitch\") on node \"master-2\" DevicePath \"\"" Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.746103 4776 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-netd\") on node \"master-2\" DevicePath \"\"" Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.746119 4776 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-cni-bin\") on node \"master-2\" DevicePath \"\"" Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.746137 4776 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-systemd-units\") on node \"master-2\" DevicePath \"\"" Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.746153 4776 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-node-log\") on node \"master-2\" DevicePath \"\"" Oct 11 10:27:52.746441 master-2 kubenswrapper[4776]: I1011 10:27:52.746169 4776 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-host-run-ovn-kubernetes\") on node \"master-2\" DevicePath \"\"" Oct 11 10:27:52.747721 master-2 kubenswrapper[4776]: I1011 10:27:52.747644 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c908109b-a45d-464d-9ea0-f0823d2cc341-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:27:52.748292 master-2 kubenswrapper[4776]: I1011 10:27:52.748221 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c908109b-a45d-464d-9ea0-f0823d2cc341-kube-api-access-dxlsk" (OuterVolumeSpecName: "kube-api-access-dxlsk") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). InnerVolumeSpecName "kube-api-access-dxlsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:27:52.748982 master-2 kubenswrapper[4776]: I1011 10:27:52.748898 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c908109b-a45d-464d-9ea0-f0823d2cc341" (UID: "c908109b-a45d-464d-9ea0-f0823d2cc341"). 
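
Two pod UIDs are interleaved through this stretch of reconciler output: the UnmountVolume.TearDown and "Volume detached" lines carry the old pod's UID (c908109b-a45d-464d-9ea0-f0823d2cc341, ovnkube-node-p8m82), while the VerifyControllerAttachedVolume and MountVolume lines carry the replacement's UID (b7bd3364-8f2a-492d-917f-acbbe3267954, ovnkube-node-x5wg8). One way to confirm that split is to tally the operations per UID from a saved copy of this journal; the Go sketch below does so with plain string matching. The input file name kubelet.log and the chosen keywords are assumptions for illustration, not an OpenShift tool.

// volume_tally.go - illustrative log-parsing sketch.
// Counts kubelet volume reconciler operations per pod UID found in a
// saved journal excerpt (file name "kubelet.log" assumed).
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	f, err := os.Open("kubelet.log")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	uid := regexp.MustCompile(`[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}`)
	counts := map[string]map[string]int{} // pod UID -> operation -> count

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		var op string
		switch {
		case strings.Contains(line, "UnmountVolume.TearDown succeeded"):
			op = "teardown"
		case strings.Contains(line, "Volume detached for volume"):
			op = "detached"
		case strings.Contains(line, "MountVolume.SetUp succeeded"):
			op = "setup"
		default:
			continue
		}
		if id := uid.FindString(line); id != "" {
			if counts[id] == nil {
				counts[id] = map[string]int{}
			}
			counts[id][op]++
		}
	}
	for id, ops := range counts {
		fmt.Println(id, ops)
	}
}
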
InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:27:52.847397 master-2 kubenswrapper[4776]: I1011 10:27:52.847320 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-systemd-units\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.847564 master-2 kubenswrapper[4776]: I1011 10:27:52.847391 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-run-systemd\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.847564 master-2 kubenswrapper[4776]: I1011 10:27:52.847440 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-node-log\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.847564 master-2 kubenswrapper[4776]: I1011 10:27:52.847461 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-systemd-units\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.847564 master-2 kubenswrapper[4776]: I1011 10:27:52.847466 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.847564 master-2 kubenswrapper[4776]: I1011 10:27:52.847516 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.847564 master-2 kubenswrapper[4776]: I1011 10:27:52.847531 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-log-socket\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.847564 master-2 kubenswrapper[4776]: I1011 10:27:52.847541 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-run-systemd\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.847564 master-2 kubenswrapper[4776]: I1011 10:27:52.847555 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-log-socket\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.847803 master-2 kubenswrapper[4776]: I1011 10:27:52.847573 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-node-log\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.847803 master-2 kubenswrapper[4776]: I1011 10:27:52.847590 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-run-openvswitch\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.847803 master-2 kubenswrapper[4776]: I1011 10:27:52.847614 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-run-openvswitch\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.847803 master-2 kubenswrapper[4776]: I1011 10:27:52.847634 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-var-lib-openvswitch\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.847803 master-2 kubenswrapper[4776]: I1011 10:27:52.847667 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-cni-netd\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.847803 master-2 kubenswrapper[4776]: I1011 10:27:52.847718 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-var-lib-openvswitch\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.847803 master-2 kubenswrapper[4776]: I1011 10:27:52.847728 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b7bd3364-8f2a-492d-917f-acbbe3267954-ovnkube-config\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.847803 master-2 kubenswrapper[4776]: I1011 10:27:52.847783 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2g8t\" (UniqueName: \"kubernetes.io/projected/b7bd3364-8f2a-492d-917f-acbbe3267954-kube-api-access-h2g8t\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.848000 master-2 kubenswrapper[4776]: I1011 10:27:52.847820 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-run-ovn-kubernetes\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.848000 master-2 kubenswrapper[4776]: I1011 10:27:52.847836 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-cni-netd\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.848000 master-2 kubenswrapper[4776]: I1011 10:27:52.847894 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-slash\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.848000 master-2 kubenswrapper[4776]: I1011 10:27:52.847856 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-run-ovn-kubernetes\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.848000 master-2 kubenswrapper[4776]: I1011 10:27:52.847852 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-slash\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.848000 master-2 kubenswrapper[4776]: I1011 10:27:52.847971 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-run-netns\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.848148 master-2 kubenswrapper[4776]: I1011 10:27:52.848008 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b7bd3364-8f2a-492d-917f-acbbe3267954-ovnkube-script-lib\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.848148 master-2 kubenswrapper[4776]: I1011 10:27:52.848045 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-kubelet\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.848148 master-2 kubenswrapper[4776]: I1011 10:27:52.848074 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-cni-bin\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.848148 master-2 kubenswrapper[4776]: I1011 10:27:52.848100 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-run-netns\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.848148 master-2 kubenswrapper[4776]: I1011 10:27:52.848128 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-kubelet\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.848272 master-2 kubenswrapper[4776]: I1011 10:27:52.848103 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b7bd3364-8f2a-492d-917f-acbbe3267954-ovn-node-metrics-cert\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.848272 master-2 kubenswrapper[4776]: I1011 10:27:52.848160 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-host-cni-bin\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.848272 master-2 kubenswrapper[4776]: I1011 10:27:52.848205 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b7bd3364-8f2a-492d-917f-acbbe3267954-env-overrides\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.848345 master-2 kubenswrapper[4776]: I1011 10:27:52.848238 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-etc-openvswitch\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.848378 master-2 kubenswrapper[4776]: I1011 10:27:52.848341 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-run-ovn\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.848378 master-2 kubenswrapper[4776]: I1011 10:27:52.848373 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxlsk\" (UniqueName: \"kubernetes.io/projected/c908109b-a45d-464d-9ea0-f0823d2cc341-kube-api-access-dxlsk\") on node \"master-2\" DevicePath \"\"" Oct 11 10:27:52.848432 master-2 kubenswrapper[4776]: I1011 10:27:52.848389 4776 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c908109b-a45d-464d-9ea0-f0823d2cc341-ovn-node-metrics-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:27:52.848432 master-2 kubenswrapper[4776]: I1011 10:27:52.848403 4776 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c908109b-a45d-464d-9ea0-f0823d2cc341-run-systemd\") on node \"master-2\" DevicePath \"\"" Oct 11 10:27:52.848432 master-2 kubenswrapper[4776]: I1011 10:27:52.848409 4776 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b7bd3364-8f2a-492d-917f-acbbe3267954-ovnkube-config\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.848511 master-2 kubenswrapper[4776]: I1011 10:27:52.848441 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-run-ovn\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.848511 master-2 kubenswrapper[4776]: I1011 10:27:52.848483 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7bd3364-8f2a-492d-917f-acbbe3267954-etc-openvswitch\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.848601 master-2 kubenswrapper[4776]: I1011 10:27:52.848569 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b7bd3364-8f2a-492d-917f-acbbe3267954-ovnkube-script-lib\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.848907 master-2 kubenswrapper[4776]: I1011 10:27:52.848875 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b7bd3364-8f2a-492d-917f-acbbe3267954-env-overrides\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.850946 master-2 kubenswrapper[4776]: I1011 10:27:52.850912 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b7bd3364-8f2a-492d-917f-acbbe3267954-ovn-node-metrics-cert\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:52.866116 master-2 kubenswrapper[4776]: I1011 10:27:52.866066 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2g8t\" (UniqueName: \"kubernetes.io/projected/b7bd3364-8f2a-492d-917f-acbbe3267954-kube-api-access-h2g8t\") pod \"ovnkube-node-x5wg8\" (UID: \"b7bd3364-8f2a-492d-917f-acbbe3267954\") " pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:53.056668 master-2 kubenswrapper[4776]: I1011 10:27:53.056406 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:53.379373 master-2 kubenswrapper[4776]: I1011 10:27:53.379087 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/ovnkube-controller/0.log" Oct 11 10:27:53.381653 master-2 kubenswrapper[4776]: I1011 10:27:53.381558 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/kube-rbac-proxy-ovn-metrics/0.log" Oct 11 10:27:53.382731 master-2 kubenswrapper[4776]: I1011 10:27:53.382658 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/kube-rbac-proxy-node/0.log" Oct 11 10:27:53.383280 master-2 kubenswrapper[4776]: I1011 10:27:53.383232 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/ovn-acl-logging/0.log" Oct 11 10:27:53.383979 master-2 kubenswrapper[4776]: I1011 10:27:53.383931 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-p8m82_c908109b-a45d-464d-9ea0-f0823d2cc341/ovn-controller/0.log" Oct 11 10:27:53.384492 master-2 kubenswrapper[4776]: I1011 10:27:53.384443 4776 generic.go:334] "Generic (PLEG): container finished" podID="c908109b-a45d-464d-9ea0-f0823d2cc341" containerID="fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700" exitCode=1 Oct 11 10:27:53.384569 master-2 kubenswrapper[4776]: I1011 10:27:53.384508 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerDied","Data":"fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700"} Oct 11 10:27:53.384708 master-2 kubenswrapper[4776]: I1011 10:27:53.384639 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" event={"ID":"c908109b-a45d-464d-9ea0-f0823d2cc341","Type":"ContainerDied","Data":"8f67a229933856e98df91cbb3597a852767fb99fd9bf0ca790d3dd81716f751d"} Oct 11 10:27:53.384788 master-2 kubenswrapper[4776]: I1011 10:27:53.384712 4776 scope.go:117] "RemoveContainer" containerID="fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700" Oct 11 10:27:53.384842 master-2 kubenswrapper[4776]: I1011 10:27:53.384651 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-p8m82" Oct 11 10:27:53.386777 master-2 kubenswrapper[4776]: I1011 10:27:53.386741 4776 generic.go:334] "Generic (PLEG): container finished" podID="b7bd3364-8f2a-492d-917f-acbbe3267954" containerID="b3aed0e6bbc92472d45e0f8800eaeb8e8e1992c8df1659a9f1421e62f43ff048" exitCode=0 Oct 11 10:27:53.386856 master-2 kubenswrapper[4776]: I1011 10:27:53.386786 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" event={"ID":"b7bd3364-8f2a-492d-917f-acbbe3267954","Type":"ContainerDied","Data":"b3aed0e6bbc92472d45e0f8800eaeb8e8e1992c8df1659a9f1421e62f43ff048"} Oct 11 10:27:53.386903 master-2 kubenswrapper[4776]: I1011 10:27:53.386855 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" event={"ID":"b7bd3364-8f2a-492d-917f-acbbe3267954","Type":"ContainerStarted","Data":"3c84d7f947367b1e6704e4423e2f88f2d94595023c6e897e5666149c687ce07b"} Oct 11 10:27:53.400846 master-2 kubenswrapper[4776]: I1011 10:27:53.400376 4776 scope.go:117] "RemoveContainer" containerID="082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df" Oct 11 10:27:53.418621 master-2 kubenswrapper[4776]: I1011 10:27:53.418152 4776 scope.go:117] "RemoveContainer" containerID="8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7" Oct 11 10:27:53.433622 master-2 kubenswrapper[4776]: I1011 10:27:53.433557 4776 scope.go:117] "RemoveContainer" containerID="2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a" Oct 11 10:27:53.444832 master-2 kubenswrapper[4776]: I1011 10:27:53.444771 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-p8m82"] Oct 11 10:27:53.450344 master-2 kubenswrapper[4776]: I1011 10:27:53.450273 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-p8m82"] Oct 11 10:27:53.452911 master-2 kubenswrapper[4776]: I1011 10:27:53.452595 4776 scope.go:117] "RemoveContainer" containerID="091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88" Oct 11 10:27:53.468462 master-2 kubenswrapper[4776]: I1011 10:27:53.468223 4776 scope.go:117] "RemoveContainer" containerID="f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f" Oct 11 10:27:53.480535 master-2 kubenswrapper[4776]: I1011 10:27:53.480413 4776 scope.go:117] "RemoveContainer" containerID="5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552" Oct 11 10:27:53.502951 master-2 kubenswrapper[4776]: I1011 10:27:53.502798 4776 scope.go:117] "RemoveContainer" containerID="bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81" Oct 11 10:27:53.519438 master-2 kubenswrapper[4776]: I1011 10:27:53.519354 4776 scope.go:117] "RemoveContainer" containerID="9f27ab47125b8898a3acd9c0d642a46c72b3fca0b8d884da468e7fad2fb4b4bf" Oct 11 10:27:53.530214 master-2 kubenswrapper[4776]: I1011 10:27:53.529560 4776 scope.go:117] "RemoveContainer" containerID="fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700" Oct 11 10:27:53.530214 master-2 kubenswrapper[4776]: E1011 10:27:53.529843 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700\": container with ID starting with fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700 not found: ID does not exist" 
containerID="fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700" Oct 11 10:27:53.530214 master-2 kubenswrapper[4776]: I1011 10:27:53.529871 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700"} err="failed to get container status \"fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700\": rpc error: code = NotFound desc = could not find container \"fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700\": container with ID starting with fe6c9a5014ee8609b0653542902aff4e66b126949dff3dd287d775e46d3df700 not found: ID does not exist" Oct 11 10:27:53.530214 master-2 kubenswrapper[4776]: I1011 10:27:53.529915 4776 scope.go:117] "RemoveContainer" containerID="082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df" Oct 11 10:27:53.530214 master-2 kubenswrapper[4776]: E1011 10:27:53.530188 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df\": container with ID starting with 082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df not found: ID does not exist" containerID="082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df" Oct 11 10:27:53.530214 master-2 kubenswrapper[4776]: I1011 10:27:53.530209 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df"} err="failed to get container status \"082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df\": rpc error: code = NotFound desc = could not find container \"082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df\": container with ID starting with 082c40b8a30c4d0599806ef48ee340075dccefb25a9fc973aa93892ebda416df not found: ID does not exist" Oct 11 10:27:53.530214 master-2 kubenswrapper[4776]: I1011 10:27:53.530224 4776 scope.go:117] "RemoveContainer" containerID="8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7" Oct 11 10:27:53.530644 master-2 kubenswrapper[4776]: E1011 10:27:53.530391 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7\": container with ID starting with 8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7 not found: ID does not exist" containerID="8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7" Oct 11 10:27:53.530644 master-2 kubenswrapper[4776]: I1011 10:27:53.530409 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7"} err="failed to get container status \"8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7\": rpc error: code = NotFound desc = could not find container \"8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7\": container with ID starting with 8160b136c800bce2ae374514e9501618562f290eaf2ab411f58000f4189d04e7 not found: ID does not exist" Oct 11 10:27:53.530644 master-2 kubenswrapper[4776]: I1011 10:27:53.530421 4776 scope.go:117] "RemoveContainer" containerID="2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a" Oct 11 10:27:53.530876 master-2 kubenswrapper[4776]: E1011 10:27:53.530748 4776 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a\": container with ID starting with 2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a not found: ID does not exist" containerID="2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a" Oct 11 10:27:53.530876 master-2 kubenswrapper[4776]: I1011 10:27:53.530769 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a"} err="failed to get container status \"2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a\": rpc error: code = NotFound desc = could not find container \"2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a\": container with ID starting with 2a17073b454a1473f29e4c32d662ee2eb0a802adce6cccd3044f91457511c67a not found: ID does not exist" Oct 11 10:27:53.530876 master-2 kubenswrapper[4776]: I1011 10:27:53.530783 4776 scope.go:117] "RemoveContainer" containerID="091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88" Oct 11 10:27:53.531048 master-2 kubenswrapper[4776]: E1011 10:27:53.530986 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88\": container with ID starting with 091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88 not found: ID does not exist" containerID="091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88" Oct 11 10:27:53.531048 master-2 kubenswrapper[4776]: I1011 10:27:53.531007 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88"} err="failed to get container status \"091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88\": rpc error: code = NotFound desc = could not find container \"091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88\": container with ID starting with 091dd912a436b6faa0ac0c467c12dc5482a150157e5dab40b3a4890953c53f88 not found: ID does not exist" Oct 11 10:27:53.531048 master-2 kubenswrapper[4776]: I1011 10:27:53.531022 4776 scope.go:117] "RemoveContainer" containerID="f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f" Oct 11 10:27:53.531623 master-2 kubenswrapper[4776]: E1011 10:27:53.531537 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f\": container with ID starting with f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f not found: ID does not exist" containerID="f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f" Oct 11 10:27:53.531623 master-2 kubenswrapper[4776]: I1011 10:27:53.531561 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f"} err="failed to get container status \"f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f\": rpc error: code = NotFound desc = could not find container \"f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f\": container with ID starting with f566cfb2340482c7f2c986d7b276146e290ceaeeaf868a49241bdfcbf369e53f not found: ID does not exist" Oct 11 10:27:53.531623 master-2 kubenswrapper[4776]: 
I1011 10:27:53.531576 4776 scope.go:117] "RemoveContainer" containerID="5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552" Oct 11 10:27:53.532726 master-2 kubenswrapper[4776]: E1011 10:27:53.532406 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552\": container with ID starting with 5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552 not found: ID does not exist" containerID="5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552" Oct 11 10:27:53.532726 master-2 kubenswrapper[4776]: I1011 10:27:53.532512 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552"} err="failed to get container status \"5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552\": rpc error: code = NotFound desc = could not find container \"5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552\": container with ID starting with 5125de2f4b5fd1d9b0ca183a2d1e38f5ff329027c92b1eec8fcd106b498a3552 not found: ID does not exist" Oct 11 10:27:53.532726 master-2 kubenswrapper[4776]: I1011 10:27:53.532589 4776 scope.go:117] "RemoveContainer" containerID="bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81" Oct 11 10:27:53.533283 master-2 kubenswrapper[4776]: E1011 10:27:53.533239 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81\": container with ID starting with bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81 not found: ID does not exist" containerID="bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81" Oct 11 10:27:53.533322 master-2 kubenswrapper[4776]: I1011 10:27:53.533292 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81"} err="failed to get container status \"bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81\": rpc error: code = NotFound desc = could not find container \"bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81\": container with ID starting with bf39fd937b0c3c3214afa090a806b1195d99e243e56cb663c778aa3f2cecad81 not found: ID does not exist" Oct 11 10:27:53.533352 master-2 kubenswrapper[4776]: I1011 10:27:53.533330 4776 scope.go:117] "RemoveContainer" containerID="9f27ab47125b8898a3acd9c0d642a46c72b3fca0b8d884da468e7fad2fb4b4bf" Oct 11 10:27:53.533638 master-2 kubenswrapper[4776]: E1011 10:27:53.533612 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f27ab47125b8898a3acd9c0d642a46c72b3fca0b8d884da468e7fad2fb4b4bf\": container with ID starting with 9f27ab47125b8898a3acd9c0d642a46c72b3fca0b8d884da468e7fad2fb4b4bf not found: ID does not exist" containerID="9f27ab47125b8898a3acd9c0d642a46c72b3fca0b8d884da468e7fad2fb4b4bf" Oct 11 10:27:53.533638 master-2 kubenswrapper[4776]: I1011 10:27:53.533632 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f27ab47125b8898a3acd9c0d642a46c72b3fca0b8d884da468e7fad2fb4b4bf"} err="failed to get container status \"9f27ab47125b8898a3acd9c0d642a46c72b3fca0b8d884da468e7fad2fb4b4bf\": rpc error: code = NotFound desc = could not 
find container \"9f27ab47125b8898a3acd9c0d642a46c72b3fca0b8d884da468e7fad2fb4b4bf\": container with ID starting with 9f27ab47125b8898a3acd9c0d642a46c72b3fca0b8d884da468e7fad2fb4b4bf not found: ID does not exist" Oct 11 10:27:54.058415 master-2 kubenswrapper[4776]: I1011 10:27:54.058336 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:54.058595 master-2 kubenswrapper[4776]: I1011 10:27:54.058336 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:54.058664 master-2 kubenswrapper[4776]: E1011 10:27:54.058605 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:54.058664 master-2 kubenswrapper[4776]: E1011 10:27:54.058466 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:54.064391 master-2 kubenswrapper[4776]: I1011 10:27:54.064349 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c908109b-a45d-464d-9ea0-f0823d2cc341" path="/var/lib/kubelet/pods/c908109b-a45d-464d-9ea0-f0823d2cc341/volumes" Oct 11 10:27:54.394222 master-2 kubenswrapper[4776]: I1011 10:27:54.394147 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" event={"ID":"b7bd3364-8f2a-492d-917f-acbbe3267954","Type":"ContainerStarted","Data":"cc1598a4280245cab1f7a4fbea20199177a785ee92e9d62194ceca67349d3714"} Oct 11 10:27:54.394222 master-2 kubenswrapper[4776]: I1011 10:27:54.394196 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" event={"ID":"b7bd3364-8f2a-492d-917f-acbbe3267954","Type":"ContainerStarted","Data":"d442efc1b44b6f95f4b75faeec2f7d5b3deac6b7b138cbc3871630d947eabc45"} Oct 11 10:27:54.394222 master-2 kubenswrapper[4776]: I1011 10:27:54.394209 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" event={"ID":"b7bd3364-8f2a-492d-917f-acbbe3267954","Type":"ContainerStarted","Data":"728ce00595d9265f53bf5fbf1d588ecd2ed424cf93b146811d6c3f08d82584b6"} Oct 11 10:27:54.394222 master-2 kubenswrapper[4776]: I1011 10:27:54.394222 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" event={"ID":"b7bd3364-8f2a-492d-917f-acbbe3267954","Type":"ContainerStarted","Data":"886ab5820c28e6480d00698580e79e4781c20f7b130fa459da47233902f43417"} Oct 11 10:27:54.394222 master-2 kubenswrapper[4776]: I1011 10:27:54.394232 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" event={"ID":"b7bd3364-8f2a-492d-917f-acbbe3267954","Type":"ContainerStarted","Data":"4d72efb914bdc3ea62ac41cf6038365dd833039cf28930aafc4d0e0130f12055"} Oct 11 10:27:54.394222 master-2 
kubenswrapper[4776]: I1011 10:27:54.394244 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" event={"ID":"b7bd3364-8f2a-492d-917f-acbbe3267954","Type":"ContainerStarted","Data":"776a37488dc34a6237bc811855d780600eb2615467f8e88048305ef984cd3514"} Oct 11 10:27:56.058107 master-2 kubenswrapper[4776]: I1011 10:27:56.058060 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:56.058842 master-2 kubenswrapper[4776]: I1011 10:27:56.058107 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:56.058842 master-2 kubenswrapper[4776]: E1011 10:27:56.058236 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:56.058842 master-2 kubenswrapper[4776]: E1011 10:27:56.058333 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:56.408492 master-2 kubenswrapper[4776]: I1011 10:27:56.408423 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" event={"ID":"b7bd3364-8f2a-492d-917f-acbbe3267954","Type":"ContainerStarted","Data":"29d5b2f57601ccd97e0b67297507c979bbda2eb904fb57963f2ba752d9aac90a"} Oct 11 10:27:58.057896 master-2 kubenswrapper[4776]: I1011 10:27:58.057781 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:58.058655 master-2 kubenswrapper[4776]: I1011 10:27:58.057786 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:58.058655 master-2 kubenswrapper[4776]: E1011 10:27:58.058037 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:58.058655 master-2 kubenswrapper[4776]: E1011 10:27:58.058124 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:27:58.416995 master-2 kubenswrapper[4776]: I1011 10:27:58.416716 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" event={"ID":"b7bd3364-8f2a-492d-917f-acbbe3267954","Type":"ContainerStarted","Data":"179a33dd9b2c47cd10b8c7507158e6874a3b4b5607b9ce19ef0de9c49a47da08"} Oct 11 10:27:58.417119 master-2 kubenswrapper[4776]: I1011 10:27:58.417067 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:58.417119 master-2 kubenswrapper[4776]: I1011 10:27:58.417082 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:58.440013 master-2 kubenswrapper[4776]: I1011 10:27:58.439901 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" podStartSLOduration=6.439877899 podStartE2EDuration="6.439877899s" podCreationTimestamp="2025-10-11 10:27:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:27:58.439222411 +0000 UTC m=+113.223649120" watchObservedRunningTime="2025-10-11 10:27:58.439877899 +0000 UTC m=+113.224304628" Oct 11 10:27:59.419408 master-2 kubenswrapper[4776]: I1011 10:27:59.419359 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:27:59.615147 master-2 kubenswrapper[4776]: I1011 10:27:59.614912 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-w52cn"] Oct 11 10:27:59.615147 master-2 kubenswrapper[4776]: I1011 10:27:59.615065 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:27:59.615451 master-2 kubenswrapper[4776]: E1011 10:27:59.615190 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:27:59.618356 master-2 kubenswrapper[4776]: I1011 10:27:59.618284 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-jdkgd"] Oct 11 10:27:59.618531 master-2 kubenswrapper[4776]: I1011 10:27:59.618394 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:27:59.618531 master-2 kubenswrapper[4776]: E1011 10:27:59.618485 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:28:00.513463 master-2 kubenswrapper[4776]: I1011 10:28:00.513284 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plqrv\" (UniqueName: \"kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv\") pod \"network-check-target-jdkgd\" (UID: \"f6543c6f-6f31-431e-9327-60c8cfd70c7e\") " pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:28:00.514365 master-2 kubenswrapper[4776]: E1011 10:28:00.513479 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 10:28:00.514365 master-2 kubenswrapper[4776]: E1011 10:28:00.513519 4776 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 10:28:00.514365 master-2 kubenswrapper[4776]: E1011 10:28:00.513540 4776 projected.go:194] Error preparing data for projected volume kube-api-access-plqrv for pod openshift-network-diagnostics/network-check-target-jdkgd: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:28:00.514365 master-2 kubenswrapper[4776]: E1011 10:28:00.513604 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv podName:f6543c6f-6f31-431e-9327-60c8cfd70c7e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:32.513581802 +0000 UTC m=+147.298008551 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-plqrv" (UniqueName: "kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv") pod "network-check-target-jdkgd" (UID: "f6543c6f-6f31-431e-9327-60c8cfd70c7e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 10:28:01.058234 master-2 kubenswrapper[4776]: I1011 10:28:01.058116 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:28:01.058518 master-2 kubenswrapper[4776]: E1011 10:28:01.058302 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:28:01.058518 master-2 kubenswrapper[4776]: I1011 10:28:01.058413 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:28:01.058919 master-2 kubenswrapper[4776]: E1011 10:28:01.058850 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:28:03.057765 master-2 kubenswrapper[4776]: I1011 10:28:03.057667 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:28:03.058332 master-2 kubenswrapper[4776]: E1011 10:28:03.057848 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jdkgd" podUID="f6543c6f-6f31-431e-9327-60c8cfd70c7e" Oct 11 10:28:03.058332 master-2 kubenswrapper[4776]: I1011 10:28:03.057698 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:28:03.058332 master-2 kubenswrapper[4776]: E1011 10:28:03.057992 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w52cn" podUID="35b21a7b-2a5a-4511-a2d5-d950752b4bda" Oct 11 10:28:03.092342 master-2 kubenswrapper[4776]: I1011 10:28:03.092264 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:28:03.111702 master-2 kubenswrapper[4776]: I1011 10:28:03.111625 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:28:04.490606 master-2 kubenswrapper[4776]: I1011 10:28:04.490189 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeReady" Oct 11 10:28:04.490606 master-2 kubenswrapper[4776]: I1011 10:28:04.490585 4776 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Oct 11 10:28:04.536736 master-2 kubenswrapper[4776]: I1011 10:28:04.536343 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf"] Oct 11 10:28:04.537124 master-2 kubenswrapper[4776]: I1011 10:28:04.537046 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb"] Oct 11 10:28:04.537305 master-2 kubenswrapper[4776]: I1011 10:28:04.537061 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:04.537537 master-2 kubenswrapper[4776]: I1011 10:28:04.537488 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:04.543743 master-2 kubenswrapper[4776]: I1011 10:28:04.542426 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Oct 11 10:28:04.543743 master-2 kubenswrapper[4776]: I1011 10:28:04.542514 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Oct 11 10:28:04.543743 master-2 kubenswrapper[4776]: I1011 10:28:04.542452 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Oct 11 10:28:04.543743 master-2 kubenswrapper[4776]: I1011 10:28:04.543188 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Oct 11 10:28:04.543743 master-2 kubenswrapper[4776]: I1011 10:28:04.543598 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Oct 11 10:28:04.543743 master-2 kubenswrapper[4776]: I1011 10:28:04.543668 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Oct 11 10:28:04.544492 master-2 kubenswrapper[4776]: I1011 10:28:04.544426 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Oct 11 10:28:04.547792 master-2 kubenswrapper[4776]: I1011 10:28:04.547745 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Oct 11 10:28:04.548311 master-2 kubenswrapper[4776]: I1011 10:28:04.548268 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-9dbb96f7-b88g6"] Oct 11 10:28:04.548907 master-2 kubenswrapper[4776]: I1011 10:28:04.548833 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:04.549031 master-2 kubenswrapper[4776]: I1011 10:28:04.548983 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj"] Oct 11 10:28:04.550459 master-2 kubenswrapper[4776]: I1011 10:28:04.549423 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:04.550459 master-2 kubenswrapper[4776]: I1011 10:28:04.549934 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh"] Oct 11 10:28:04.550736 master-2 kubenswrapper[4776]: I1011 10:28:04.550656 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:04.551526 master-2 kubenswrapper[4776]: I1011 10:28:04.551181 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj"] Oct 11 10:28:04.551526 master-2 kubenswrapper[4776]: I1011 10:28:04.551521 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:04.555642 master-2 kubenswrapper[4776]: I1011 10:28:04.552157 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd"] Oct 11 10:28:04.555642 master-2 kubenswrapper[4776]: I1011 10:28:04.552214 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Oct 11 10:28:04.555642 master-2 kubenswrapper[4776]: I1011 10:28:04.554955 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr"] Oct 11 10:28:04.555642 master-2 kubenswrapper[4776]: I1011 10:28:04.555128 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2"] Oct 11 10:28:04.555642 master-2 kubenswrapper[4776]: I1011 10:28:04.555350 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" Oct 11 10:28:04.555642 master-2 kubenswrapper[4776]: I1011 10:28:04.555466 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:28:04.558500 master-2 kubenswrapper[4776]: I1011 10:28:04.555753 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:04.558500 master-2 kubenswrapper[4776]: I1011 10:28:04.558332 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7"] Oct 11 10:28:04.558669 master-2 kubenswrapper[4776]: I1011 10:28:04.558581 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:04.559983 master-2 kubenswrapper[4776]: I1011 10:28:04.559931 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Oct 11 10:28:04.560124 master-2 kubenswrapper[4776]: I1011 10:28:04.560065 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Oct 11 10:28:04.560347 master-2 kubenswrapper[4776]: I1011 10:28:04.560301 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Oct 11 10:28:04.560465 master-2 kubenswrapper[4776]: I1011 10:28:04.560395 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg"] Oct 11 10:28:04.560554 master-2 kubenswrapper[4776]: I1011 10:28:04.560532 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-7dcf5bd85b-6c2rl"] Oct 11 10:28:04.560776 master-2 kubenswrapper[4776]: I1011 10:28:04.560736 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.561000 master-2 kubenswrapper[4776]: I1011 10:28:04.560957 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:04.561423 master-2 kubenswrapper[4776]: I1011 10:28:04.561343 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-568c655666-84cp8"] Oct 11 10:28:04.561978 master-2 kubenswrapper[4776]: I1011 10:28:04.561915 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" Oct 11 10:28:04.563004 master-2 kubenswrapper[4776]: I1011 10:28:04.562935 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-vv9w8"] Oct 11 10:28:04.563303 master-2 kubenswrapper[4776]: I1011 10:28:04.563236 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Oct 11 10:28:04.563429 master-2 kubenswrapper[4776]: I1011 10:28:04.563380 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Oct 11 10:28:04.563548 master-2 kubenswrapper[4776]: I1011 10:28:04.563503 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-vv9w8" Oct 11 10:28:04.563632 master-2 kubenswrapper[4776]: I1011 10:28:04.563400 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr"] Oct 11 10:28:04.564421 master-2 kubenswrapper[4776]: I1011 10:28:04.564368 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59"] Oct 11 10:28:04.564732 master-2 kubenswrapper[4776]: I1011 10:28:04.564672 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77"] Oct 11 10:28:04.565053 master-2 kubenswrapper[4776]: I1011 10:28:04.564998 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Oct 11 10:28:04.565186 master-2 kubenswrapper[4776]: I1011 10:28:04.565111 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc"] Oct 11 10:28:04.565643 master-2 kubenswrapper[4776]: I1011 10:28:04.565591 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Oct 11 10:28:04.565938 master-2 kubenswrapper[4776]: I1011 10:28:04.565882 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd"] Oct 11 10:28:04.566187 master-2 kubenswrapper[4776]: I1011 10:28:04.566139 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Oct 11 10:28:04.566355 master-2 kubenswrapper[4776]: I1011 10:28:04.566311 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q"] Oct 11 10:28:04.566544 master-2 kubenswrapper[4776]: I1011 10:28:04.566490 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Oct 11 10:28:04.566868 master-2 kubenswrapper[4776]: I1011 10:28:04.566811 
4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp"] Oct 11 10:28:04.567207 master-2 kubenswrapper[4776]: I1011 10:28:04.567159 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Oct 11 10:28:04.607498 master-2 kubenswrapper[4776]: I1011 10:28:04.607410 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv"] Oct 11 10:28:04.607931 master-2 kubenswrapper[4776]: I1011 10:28:04.607818 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" Oct 11 10:28:04.608090 master-2 kubenswrapper[4776]: I1011 10:28:04.607932 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:04.608090 master-2 kubenswrapper[4776]: I1011 10:28:04.607994 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:04.608090 master-2 kubenswrapper[4776]: I1011 10:28:04.607878 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" Oct 11 10:28:04.608326 master-2 kubenswrapper[4776]: I1011 10:28:04.607940 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" Oct 11 10:28:04.608439 master-2 kubenswrapper[4776]: I1011 10:28:04.608397 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:04.614669 master-2 kubenswrapper[4776]: I1011 10:28:04.612494 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:28:04.615574 master-2 kubenswrapper[4776]: I1011 10:28:04.615491 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs"] Oct 11 10:28:04.615856 master-2 kubenswrapper[4776]: I1011 10:28:04.615719 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" Oct 11 10:28:04.615856 master-2 kubenswrapper[4776]: I1011 10:28:04.615751 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" Oct 11 10:28:04.616115 master-2 kubenswrapper[4776]: I1011 10:28:04.616033 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54"] Oct 11 10:28:04.619503 master-2 kubenswrapper[4776]: I1011 10:28:04.616239 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.619503 master-2 kubenswrapper[4776]: I1011 10:28:04.617461 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv"] Oct 11 10:28:04.619503 master-2 kubenswrapper[4776]: I1011 10:28:04.617875 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:04.620447 master-2 kubenswrapper[4776]: I1011 10:28:04.620370 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Oct 11 10:28:04.621152 master-2 kubenswrapper[4776]: I1011 10:28:04.621095 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Oct 11 10:28:04.621152 master-2 kubenswrapper[4776]: I1011 10:28:04.621120 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Oct 11 10:28:04.621445 master-2 kubenswrapper[4776]: I1011 10:28:04.621396 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Oct 11 10:28:04.621557 master-2 kubenswrapper[4776]: I1011 10:28:04.621396 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588"] Oct 11 10:28:04.621772 master-2 kubenswrapper[4776]: I1011 10:28:04.621707 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Oct 11 10:28:04.622010 master-2 kubenswrapper[4776]: I1011 10:28:04.621962 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Oct 11 10:28:04.622940 master-2 kubenswrapper[4776]: I1011 10:28:04.622133 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:04.622940 master-2 kubenswrapper[4776]: I1011 10:28:04.622394 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Oct 11 10:28:04.622940 master-2 kubenswrapper[4776]: I1011 10:28:04.622424 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Oct 11 10:28:04.622940 master-2 kubenswrapper[4776]: I1011 10:28:04.622449 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Oct 11 10:28:04.622940 master-2 kubenswrapper[4776]: I1011 10:28:04.622455 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Oct 11 10:28:04.622940 master-2 kubenswrapper[4776]: I1011 10:28:04.622502 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Oct 11 10:28:04.622940 master-2 kubenswrapper[4776]: I1011 10:28:04.622523 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Oct 11 10:28:04.622940 master-2 kubenswrapper[4776]: I1011 10:28:04.622541 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Oct 11 10:28:04.622940 master-2 kubenswrapper[4776]: I1011 10:28:04.622620 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Oct 11 10:28:04.622940 master-2 kubenswrapper[4776]: I1011 10:28:04.622986 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Oct 11 10:28:04.623949 master-2 kubenswrapper[4776]: I1011 10:28:04.622715 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Oct 11 10:28:04.623949 master-2 kubenswrapper[4776]: I1011 10:28:04.622754 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Oct 11 10:28:04.623949 master-2 kubenswrapper[4776]: I1011 10:28:04.622823 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Oct 11 10:28:04.623949 master-2 kubenswrapper[4776]: I1011 10:28:04.622819 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Oct 11 10:28:04.623949 master-2 kubenswrapper[4776]: I1011 10:28:04.622871 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Oct 11 10:28:04.623949 master-2 kubenswrapper[4776]: I1011 10:28:04.623121 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Oct 11 10:28:04.623949 master-2 kubenswrapper[4776]: I1011 10:28:04.623600 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Oct 11 10:28:04.623949 master-2 kubenswrapper[4776]: I1011 10:28:04.623342 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4"] Oct 11 10:28:04.623949 master-2 kubenswrapper[4776]: I1011 10:28:04.622901 4776 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Oct 11 10:28:04.623949 master-2 kubenswrapper[4776]: I1011 10:28:04.623883 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.624420 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.625374 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.627482 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.628386 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz"] Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.629087 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.629371 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc"] Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.630011 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.630115 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.630150 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.630584 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.630779 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-v6dfc"] Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.630804 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.631568 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:04.631721 master-2 kubenswrapper[4776]: I1011 10:28:04.631582 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Oct 11 10:28:04.635145 master-2 kubenswrapper[4776]: I1011 10:28:04.631865 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-s5r5b"] Oct 11 10:28:04.635145 master-2 kubenswrapper[4776]: I1011 10:28:04.632019 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Oct 11 10:28:04.635145 master-2 kubenswrapper[4776]: I1011 10:28:04.632230 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:28:04.635145 master-2 kubenswrapper[4776]: I1011 10:28:04.632236 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Oct 11 10:28:04.635145 master-2 kubenswrapper[4776]: I1011 10:28:04.632378 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Oct 11 10:28:04.635145 master-2 kubenswrapper[4776]: I1011 10:28:04.632396 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-5r2t9"] Oct 11 10:28:04.635145 master-2 kubenswrapper[4776]: I1011 10:28:04.632539 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Oct 11 10:28:04.635145 master-2 kubenswrapper[4776]: I1011 10:28:04.633005 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:28:04.638467 master-2 kubenswrapper[4776]: I1011 10:28:04.637056 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-7769d9677-wh775"] Oct 11 10:28:04.638467 master-2 kubenswrapper[4776]: I1011 10:28:04.637231 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Oct 11 10:28:04.638467 master-2 kubenswrapper[4776]: I1011 10:28:04.637426 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Oct 11 10:28:04.638467 master-2 kubenswrapper[4776]: I1011 10:28:04.637483 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Oct 11 10:28:04.638467 master-2 kubenswrapper[4776]: I1011 10:28:04.637554 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Oct 11 10:28:04.638467 master-2 kubenswrapper[4776]: I1011 10:28:04.638017 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Oct 11 10:28:04.638467 master-2 kubenswrapper[4776]: I1011 10:28:04.638213 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Oct 11 10:28:04.638467 master-2 kubenswrapper[4776]: I1011 10:28:04.638408 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Oct 11 10:28:04.640274 master-2 kubenswrapper[4776]: I1011 10:28:04.638551 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Oct 11 10:28:04.640274 master-2 kubenswrapper[4776]: I1011 10:28:04.638983 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:04.640274 master-2 kubenswrapper[4776]: I1011 10:28:04.639780 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf"] Oct 11 10:28:04.641102 master-2 kubenswrapper[4776]: I1011 10:28:04.641056 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Oct 11 10:28:04.641408 master-2 kubenswrapper[4776]: I1011 10:28:04.641279 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Oct 11 10:28:04.641639 master-2 kubenswrapper[4776]: I1011 10:28:04.641550 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj"] Oct 11 10:28:04.641898 master-2 kubenswrapper[4776]: I1011 10:28:04.641854 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.645090 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd"] Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.645200 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.645223 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.645241 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.645266 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646370 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7"] Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646452 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646507 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/66dee5be-e631-462d-8a2c-51a2031a83a2-images\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646534 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dptq\" (UniqueName: 
\"kubernetes.io/projected/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-kube-api-access-2dptq\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646562 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646586 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66dee5be-e631-462d-8a2c-51a2031a83a2-config\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646630 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-auth-proxy-config\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646653 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646536 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646818 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646611 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.646706 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc6zm\" (UniqueName: \"kubernetes.io/projected/66dee5be-e631-462d-8a2c-51a2031a83a2-kube-api-access-gc6zm\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.647331 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj"] Oct 11 10:28:04.649301 master-2 kubenswrapper[4776]: I1011 10:28:04.648590 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr"] Oct 11 10:28:04.650295 master-2 kubenswrapper[4776]: I1011 10:28:04.649500 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-568c655666-84cp8"] Oct 11 10:28:04.651205 master-2 kubenswrapper[4776]: I1011 10:28:04.651161 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Oct 11 10:28:04.651405 master-2 kubenswrapper[4776]: I1011 10:28:04.651367 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Oct 11 10:28:04.651661 master-2 kubenswrapper[4776]: I1011 10:28:04.651626 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Oct 11 10:28:04.652134 master-2 kubenswrapper[4776]: I1011 10:28:04.652096 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Oct 11 10:28:04.652340 master-2 kubenswrapper[4776]: I1011 10:28:04.652301 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Oct 11 10:28:04.652637 master-2 kubenswrapper[4776]: I1011 10:28:04.652595 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Oct 11 10:28:04.652865 master-2 kubenswrapper[4776]: I1011 10:28:04.652798 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd"] Oct 11 10:28:04.652980 master-2 kubenswrapper[4776]: I1011 10:28:04.652891 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2"] Oct 11 10:28:04.653026 master-2 kubenswrapper[4776]: I1011 10:28:04.652982 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-vv9w8"] Oct 11 10:28:04.653785 master-2 kubenswrapper[4776]: I1011 10:28:04.653207 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Oct 11 10:28:04.653785 master-2 kubenswrapper[4776]: I1011 10:28:04.653312 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Oct 11 10:28:04.653785 master-2 kubenswrapper[4776]: I1011 10:28:04.653444 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Oct 11 10:28:04.653785 master-2 kubenswrapper[4776]: I1011 10:28:04.653539 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Oct 11 10:28:04.653785 master-2 kubenswrapper[4776]: I1011 10:28:04.653778 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Oct 11 10:28:04.653785 master-2 kubenswrapper[4776]: I1011 10:28:04.653780 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Oct 11 10:28:04.654523 master-2 kubenswrapper[4776]: I1011 10:28:04.653791 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Oct 11 
10:28:04.654523 master-2 kubenswrapper[4776]: I1011 10:28:04.653860 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Oct 11 10:28:04.654523 master-2 kubenswrapper[4776]: I1011 10:28:04.653961 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Oct 11 10:28:04.654523 master-2 kubenswrapper[4776]: I1011 10:28:04.654300 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Oct 11 10:28:04.654523 master-2 kubenswrapper[4776]: I1011 10:28:04.654459 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Oct 11 10:28:04.655578 master-2 kubenswrapper[4776]: I1011 10:28:04.654862 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp"] Oct 11 10:28:04.655927 master-2 kubenswrapper[4776]: I1011 10:28:04.655846 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77"] Oct 11 10:28:04.656340 master-2 kubenswrapper[4776]: I1011 10:28:04.656283 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Oct 11 10:28:04.656615 master-2 kubenswrapper[4776]: I1011 10:28:04.656569 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Oct 11 10:28:04.656949 master-2 kubenswrapper[4776]: I1011 10:28:04.656733 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-9dbb96f7-b88g6"] Oct 11 10:28:04.657780 master-2 kubenswrapper[4776]: I1011 10:28:04.657729 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc"] Oct 11 10:28:04.658464 master-2 kubenswrapper[4776]: I1011 10:28:04.658412 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-5mn8b"] Oct 11 10:28:04.658573 master-2 kubenswrapper[4776]: I1011 10:28:04.658465 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Oct 11 10:28:04.658573 master-2 kubenswrapper[4776]: I1011 10:28:04.658496 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Oct 11 10:28:04.658842 master-2 kubenswrapper[4776]: I1011 10:28:04.658828 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Oct 11 10:28:04.659118 master-2 kubenswrapper[4776]: I1011 10:28:04.659069 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5mn8b" Oct 11 10:28:04.659226 master-2 kubenswrapper[4776]: I1011 10:28:04.659176 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Oct 11 10:28:04.659392 master-2 kubenswrapper[4776]: I1011 10:28:04.659348 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588"] Oct 11 10:28:04.659745 master-2 kubenswrapper[4776]: I1011 10:28:04.659642 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Oct 11 10:28:04.659911 master-2 kubenswrapper[4776]: I1011 10:28:04.659877 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Oct 11 10:28:04.660754 master-2 kubenswrapper[4776]: I1011 10:28:04.660656 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Oct 11 10:28:04.660896 master-2 kubenswrapper[4776]: I1011 10:28:04.660796 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Oct 11 10:28:04.660896 master-2 kubenswrapper[4776]: I1011 10:28:04.660825 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb"] Oct 11 10:28:04.661372 master-2 kubenswrapper[4776]: I1011 10:28:04.661312 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-7dcf5bd85b-6c2rl"] Oct 11 10:28:04.662565 master-2 kubenswrapper[4776]: I1011 10:28:04.662519 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Oct 11 10:28:04.662765 master-2 kubenswrapper[4776]: I1011 10:28:04.662596 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Oct 11 10:28:04.663558 master-2 kubenswrapper[4776]: I1011 10:28:04.663436 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg"] Oct 11 10:28:04.665592 master-2 kubenswrapper[4776]: I1011 10:28:04.665411 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59"] Oct 11 10:28:04.666352 master-2 kubenswrapper[4776]: I1011 10:28:04.666283 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Oct 11 10:28:04.667481 master-2 kubenswrapper[4776]: I1011 10:28:04.667409 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Oct 11 10:28:04.670397 master-2 kubenswrapper[4776]: I1011 10:28:04.670328 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q"] Oct 11 10:28:04.673141 master-2 kubenswrapper[4776]: I1011 10:28:04.673090 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv"] Oct 11 10:28:04.673625 master-2 kubenswrapper[4776]: I1011 10:28:04.673577 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Oct 11 10:28:04.674303 master-2 
kubenswrapper[4776]: I1011 10:28:04.674267 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs"] Oct 11 10:28:04.675163 master-2 kubenswrapper[4776]: I1011 10:28:04.675127 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr"] Oct 11 10:28:04.675989 master-2 kubenswrapper[4776]: I1011 10:28:04.675955 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc"] Oct 11 10:28:04.676917 master-2 kubenswrapper[4776]: I1011 10:28:04.676857 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54"] Oct 11 10:28:04.677777 master-2 kubenswrapper[4776]: I1011 10:28:04.677738 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz"] Oct 11 10:28:04.678933 master-2 kubenswrapper[4776]: I1011 10:28:04.678870 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4"] Oct 11 10:28:04.679314 master-2 kubenswrapper[4776]: I1011 10:28:04.679280 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Oct 11 10:28:04.680280 master-2 kubenswrapper[4776]: I1011 10:28:04.680257 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh"] Oct 11 10:28:04.681904 master-2 kubenswrapper[4776]: I1011 10:28:04.681868 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-s5r5b"] Oct 11 10:28:04.682878 master-2 kubenswrapper[4776]: I1011 10:28:04.682827 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-7769d9677-wh775"] Oct 11 10:28:04.684014 master-2 kubenswrapper[4776]: I1011 10:28:04.683879 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-5r2t9"] Oct 11 10:28:04.700337 master-2 kubenswrapper[4776]: I1011 10:28:04.700299 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt" Oct 11 10:28:04.719969 master-2 kubenswrapper[4776]: I1011 10:28:04.719932 4776 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret" Oct 11 10:28:04.740737 master-2 kubenswrapper[4776]: I1011 10:28:04.740659 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config" Oct 11 10:28:04.747414 master-2 kubenswrapper[4776]: I1011 10:28:04.747374 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8050d30-444b-40a5-829c-1e3b788910a0-serving-cert\") pod \"openshift-apiserver-operator-7d88655794-7jd4q\" (UID: \"f8050d30-444b-40a5-829c-1e3b788910a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" Oct 11 10:28:04.747489 master-2 kubenswrapper[4776]: I1011 10:28:04.747421 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" 
(UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-profile-collector-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:04.747489 master-2 kubenswrapper[4776]: I1011 10:28:04.747449 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-serving-cert\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.747489 master-2 kubenswrapper[4776]: I1011 10:28:04.747480 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:04.747580 master-2 kubenswrapper[4776]: I1011 10:28:04.747503 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59763d5b-237f-4095-bf52-86bb0154381c-serving-cert\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.747580 master-2 kubenswrapper[4776]: I1011 10:28:04.747526 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpxlw\" (UniqueName: \"kubernetes.io/projected/8757af56-20fb-439e-adba-7e4e50378936-kube-api-access-xpxlw\") pod \"assisted-installer-controller-v6dfc\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:04.747580 master-2 kubenswrapper[4776]: I1011 10:28:04.747549 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w627\" (UniqueName: \"kubernetes.io/projected/88129ec6-6f99-42a1-842a-6a965c6b58fe-kube-api-access-4w627\") pod \"openshift-controller-manager-operator-5745565d84-bq4rs\" (UID: \"88129ec6-6f99-42a1-842a-6a965c6b58fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" Oct 11 10:28:04.747580 master-2 kubenswrapper[4776]: I1011 10:28:04.747573 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77b56b6f4f-dczh4\" (UID: \"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" Oct 11 10:28:04.747712 master-2 kubenswrapper[4776]: I1011 10:28:04.747595 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:04.747712 master-2 kubenswrapper[4776]: I1011 
10:28:04.747619 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxlns\" (UniqueName: \"kubernetes.io/projected/b16a4f10-c724-43cf-acd4-b3f5aa575653-kube-api-access-mxlns\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:04.747712 master-2 kubenswrapper[4776]: I1011 10:28:04.747643 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5wnj\" (UniqueName: \"kubernetes.io/projected/e4536c84-d8f3-4808-bf8b-9b40695f46de-kube-api-access-x5wnj\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:04.747712 master-2 kubenswrapper[4776]: E1011 10:28:04.747659 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Oct 11 10:28:04.747712 master-2 kubenswrapper[4776]: I1011 10:28:04.747666 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-profile-collector-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:04.747841 master-2 kubenswrapper[4776]: E1011 10:28:04.747721 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert podName:6b8dc5b8-3c48-4dba-9992-6e269ca133f1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.247701627 +0000 UTC m=+120.032128336 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert") pod "cluster-autoscaler-operator-7ff449c7c5-cfvjb" (UID: "6b8dc5b8-3c48-4dba-9992-6e269ca133f1") : secret "cluster-autoscaler-operator-cert" not found Oct 11 10:28:04.747841 master-2 kubenswrapper[4776]: I1011 10:28:04.747745 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:04.747841 master-2 kubenswrapper[4776]: I1011 10:28:04.747772 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-bound-sa-token\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:04.747841 master-2 kubenswrapper[4776]: I1011 10:28:04.747802 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:04.747841 master-2 kubenswrapper[4776]: I1011 10:28:04.747820 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7004f3ff-6db8-446d-94c1-1223e975299d-service-ca-bundle\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:04.747841 master-2 kubenswrapper[4776]: I1011 10:28:04.747836 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-trusted-ca\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:04.747999 master-2 kubenswrapper[4776]: I1011 10:28:04.747854 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:04.747999 master-2 kubenswrapper[4776]: I1011 10:28:04.747876 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7004f3ff-6db8-446d-94c1-1223e975299d-trusted-ca-bundle\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:04.747999 master-2 
kubenswrapper[4776]: I1011 10:28:04.747895 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/548333d7-2374-4c38-b4fd-45c2bee2ac4e-images\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:04.747999 master-2 kubenswrapper[4776]: I1011 10:28:04.747915 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmd5r\" (UniqueName: \"kubernetes.io/projected/b562963f-7112-411a-a64c-3b8eba909c59-kube-api-access-rmd5r\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:04.747999 master-2 kubenswrapper[4776]: I1011 10:28:04.747932 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:04.747999 master-2 kubenswrapper[4776]: I1011 10:28:04.747950 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:04.747999 master-2 kubenswrapper[4776]: I1011 10:28:04.747971 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw9qb\" (UniqueName: \"kubernetes.io/projected/e20ebc39-150b-472a-bb22-328d8f5db87b-kube-api-access-pw9qb\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:28:04.748186 master-2 kubenswrapper[4776]: I1011 10:28:04.747993 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7004f3ff-6db8-446d-94c1-1223e975299d-config\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:04.748186 master-2 kubenswrapper[4776]: I1011 10:28:04.748033 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e487f283-7482-463c-90b6-a812e00d0e35-config\") pod \"kube-controller-manager-operator-5d85974df9-5gj77\" (UID: \"e487f283-7482-463c-90b6-a812e00d0e35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" Oct 11 10:28:04.748186 master-2 kubenswrapper[4776]: I1011 10:28:04.748051 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e487f283-7482-463c-90b6-a812e00d0e35-kube-api-access\") pod \"kube-controller-manager-operator-5d85974df9-5gj77\" (UID: \"e487f283-7482-463c-90b6-a812e00d0e35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" Oct 11 10:28:04.748186 master-2 kubenswrapper[4776]: I1011 10:28:04.748072 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/dbaa6ca7-9865-42f6-8030-2decf702caa1-telemetry-config\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:04.748186 master-2 kubenswrapper[4776]: I1011 10:28:04.748093 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:04.748186 master-2 kubenswrapper[4776]: I1011 10:28:04.748123 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlz8s\" (UniqueName: \"kubernetes.io/projected/d4354488-1b32-422d-bb06-767a952192a5-kube-api-access-tlz8s\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:04.748186 master-2 kubenswrapper[4776]: I1011 10:28:04.748166 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7004f3ff-6db8-446d-94c1-1223e975299d-serving-cert\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:04.748186 master-2 kubenswrapper[4776]: I1011 10:28:04.748193 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qxt8\" (UniqueName: \"kubernetes.io/projected/e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc-kube-api-access-7qxt8\") pod \"cluster-olm-operator-77b56b6f4f-dczh4\" (UID: \"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" Oct 11 10:28:04.748510 master-2 kubenswrapper[4776]: I1011 10:28:04.748215 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:04.748510 master-2 kubenswrapper[4776]: I1011 10:28:04.748245 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw2ll\" (UniqueName: \"kubernetes.io/projected/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-kube-api-access-xw2ll\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " 
pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:04.748510 master-2 kubenswrapper[4776]: I1011 10:28:04.748267 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:04.748510 master-2 kubenswrapper[4776]: I1011 10:28:04.748293 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6967590c-695e-4e20-964b-0c643abdf367-config\") pod \"kube-apiserver-operator-68f5d95b74-9h5mv\" (UID: \"6967590c-695e-4e20-964b-0c643abdf367\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" Oct 11 10:28:04.748510 master-2 kubenswrapper[4776]: I1011 10:28:04.748314 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjs47\" (UniqueName: \"kubernetes.io/projected/9d362fb9-48e4-4d72-a940-ec6c9c051fac-kube-api-access-hjs47\") pod \"openshift-config-operator-55957b47d5-f7vv7\" (UID: \"9d362fb9-48e4-4d72-a940-ec6c9c051fac\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:04.748510 master-2 kubenswrapper[4776]: I1011 10:28:04.748338 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58aef476-6586-47bb-bf45-dbeccac6271a-serving-cert\") pod \"openshift-kube-scheduler-operator-766d6b44f6-s5shc\" (UID: \"58aef476-6586-47bb-bf45-dbeccac6271a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" Oct 11 10:28:04.748510 master-2 kubenswrapper[4776]: I1011 10:28:04.748361 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05cf2994-c049-4f42-b2d8-83b23e7e763a-serving-cert\") pod \"service-ca-operator-568c655666-84cp8\" (UID: \"05cf2994-c049-4f42-b2d8-83b23e7e763a\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" Oct 11 10:28:04.748510 master-2 kubenswrapper[4776]: I1011 10:28:04.748392 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29krx\" (UniqueName: \"kubernetes.io/projected/eba1e82e-9f3e-4273-836e-9407cc394b10-kube-api-access-29krx\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:04.748510 master-2 kubenswrapper[4776]: I1011 10:28:04.748422 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:04.748510 master-2 kubenswrapper[4776]: I1011 10:28:04.748489 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/b562963f-7112-411a-a64c-3b8eba909c59-bound-sa-token\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:04.749032 master-2 kubenswrapper[4776]: I1011 10:28:04.748522 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-var-run-resolv-conf\") pod \"assisted-installer-controller-v6dfc\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:04.749032 master-2 kubenswrapper[4776]: I1011 10:28:04.748560 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plpwg\" (UniqueName: \"kubernetes.io/projected/64310b0b-bae1-4ad3-b106-6d59d47d29b2-kube-api-access-plpwg\") pod \"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:28:04.749032 master-2 kubenswrapper[4776]: I1011 10:28:04.748595 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxt2q\" (UniqueName: \"kubernetes.io/projected/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-kube-api-access-mxt2q\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.749032 master-2 kubenswrapper[4776]: I1011 10:28:04.748645 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/66dee5be-e631-462d-8a2c-51a2031a83a2-images\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:04.749032 master-2 kubenswrapper[4776]: I1011 10:28:04.748728 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05cf2994-c049-4f42-b2d8-83b23e7e763a-config\") pod \"service-ca-operator-568c655666-84cp8\" (UID: \"05cf2994-c049-4f42-b2d8-83b23e7e763a\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" Oct 11 10:28:04.749032 master-2 kubenswrapper[4776]: I1011 10:28:04.748873 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-etcd-service-ca\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.749032 master-2 kubenswrapper[4776]: I1011 10:28:04.748912 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-etcd-client\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.749032 master-2 kubenswrapper[4776]: I1011 10:28:04.748949 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-vb54g\" (UniqueName: \"kubernetes.io/projected/dbaa6ca7-9865-42f6-8030-2decf702caa1-kube-api-access-vb54g\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:04.749032 master-2 kubenswrapper[4776]: I1011 10:28:04.749012 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5q65\" (UniqueName: \"kubernetes.io/projected/a0b806b9-13ff-45fa-afba-5d0c89eac7df-kube-api-access-g5q65\") pod \"csi-snapshot-controller-operator-7ff96dd767-vv9w8\" (UID: \"a0b806b9-13ff-45fa-afba-5d0c89eac7df\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-vv9w8" Oct 11 10:28:04.749307 master-2 kubenswrapper[4776]: I1011 10:28:04.749073 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66dee5be-e631-462d-8a2c-51a2031a83a2-config\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:04.749307 master-2 kubenswrapper[4776]: I1011 10:28:04.749108 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88129ec6-6f99-42a1-842a-6a965c6b58fe-serving-cert\") pod \"openshift-controller-manager-operator-5745565d84-bq4rs\" (UID: \"88129ec6-6f99-42a1-842a-6a965c6b58fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" Oct 11 10:28:04.749612 master-2 kubenswrapper[4776]: I1011 10:28:04.749456 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-auth-proxy-config\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:04.749685 master-2 kubenswrapper[4776]: I1011 10:28:04.749604 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/66dee5be-e631-462d-8a2c-51a2031a83a2-images\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:04.749685 master-2 kubenswrapper[4776]: I1011 10:28:04.749643 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6967590c-695e-4e20-964b-0c643abdf367-kube-api-access\") pod \"kube-apiserver-operator-68f5d95b74-9h5mv\" (UID: \"6967590c-695e-4e20-964b-0c643abdf367\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" Oct 11 10:28:04.749780 master-2 kubenswrapper[4776]: I1011 10:28:04.749750 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-ca-bundle\") pod \"assisted-installer-controller-v6dfc\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:04.749827 master-2 
kubenswrapper[4776]: I1011 10:28:04.749801 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88129ec6-6f99-42a1-842a-6a965c6b58fe-config\") pod \"openshift-controller-manager-operator-5745565d84-bq4rs\" (UID: \"88129ec6-6f99-42a1-842a-6a965c6b58fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" Oct 11 10:28:04.749861 master-2 kubenswrapper[4776]: I1011 10:28:04.749831 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmwgj\" (UniqueName: \"kubernetes.io/projected/89e02bcb-b3fe-4a45-a531-4ab41d8ee424-kube-api-access-xmwgj\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ww4zz\" (UID: \"89e02bcb-b3fe-4a45-a531-4ab41d8ee424\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" Oct 11 10:28:04.749892 master-2 kubenswrapper[4776]: I1011 10:28:04.749881 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f54r2\" (UniqueName: \"kubernetes.io/projected/05cf2994-c049-4f42-b2d8-83b23e7e763a-kube-api-access-f54r2\") pod \"service-ca-operator-568c655666-84cp8\" (UID: \"05cf2994-c049-4f42-b2d8-83b23e7e763a\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" Oct 11 10:28:04.749951 master-2 kubenswrapper[4776]: I1011 10:28:04.749928 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcdgf\" (UniqueName: \"kubernetes.io/projected/f8050d30-444b-40a5-829c-1e3b788910a0-kube-api-access-rcdgf\") pod \"openshift-apiserver-operator-7d88655794-7jd4q\" (UID: \"f8050d30-444b-40a5-829c-1e3b788910a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.749975 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750012 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-config\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750081 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66dee5be-e631-462d-8a2c-51a2031a83a2-config\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750120 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/59763d5b-237f-4095-bf52-86bb0154381c-trusted-ca-bundle\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750244 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89e02bcb-b3fe-4a45-a531-4ab41d8ee424-config\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ww4zz\" (UID: \"89e02bcb-b3fe-4a45-a531-4ab41d8ee424\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750287 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58aef476-6586-47bb-bf45-dbeccac6271a-config\") pod \"openshift-kube-scheduler-operator-766d6b44f6-s5shc\" (UID: \"58aef476-6586-47bb-bf45-dbeccac6271a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750411 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vc8b\" (UniqueName: \"kubernetes.io/projected/e540333c-4b4d-439e-a82a-cd3a97c95a43-kube-api-access-2vc8b\") pod \"cluster-storage-operator-56d4b95494-9fbb2\" (UID: \"e540333c-4b4d-439e-a82a-cd3a97c95a43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750455 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9d362fb9-48e4-4d72-a940-ec6c9c051fac-available-featuregates\") pod \"openshift-config-operator-55957b47d5-f7vv7\" (UID: \"9d362fb9-48e4-4d72-a940-ec6c9c051fac\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750584 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls\") pod \"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750626 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b7hm\" (UniqueName: \"kubernetes.io/projected/cbf33a7e-abea-411d-9a19-85cfe67debe3-kube-api-access-9b7hm\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750648 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b16a4f10-c724-43cf-acd4-b3f5aa575653-trusted-ca\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 
10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750727 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc6zm\" (UniqueName: \"kubernetes.io/projected/66dee5be-e631-462d-8a2c-51a2031a83a2-kube-api-access-gc6zm\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750747 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwznd\" (UniqueName: \"kubernetes.io/projected/59763d5b-237f-4095-bf52-86bb0154381c-kube-api-access-hwznd\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750766 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59763d5b-237f-4095-bf52-86bb0154381c-service-ca-bundle\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.753282 master-2 kubenswrapper[4776]: I1011 10:28:04.750783 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-resolv-conf\") pod \"assisted-installer-controller-v6dfc\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.750802 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/548333d7-2374-4c38-b4fd-45c2bee2ac4e-config\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.750819 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.750745 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-auth-proxy-config\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.750883 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfh8n\" (UniqueName: \"kubernetes.io/projected/548333d7-2374-4c38-b4fd-45c2bee2ac4e-kube-api-access-dfh8n\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " 
pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.750911 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e4536c84-d8f3-4808-bf8b-9b40695f46de-images\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.751150 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.751195 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f979h\" (UniqueName: \"kubernetes.io/projected/e3281eb7-fb96-4bae-8c55-b79728d426b0-kube-api-access-f979h\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.751221 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bzgn\" (UniqueName: \"kubernetes.io/projected/7004f3ff-6db8-446d-94c1-1223e975299d-kube-api-access-8bzgn\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.751237 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/08b7d4e3-1682-4a3b-a757-84ded3a16764-auth-proxy-config\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.751276 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e540333c-4b4d-439e-a82a-cd3a97c95a43-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-56d4b95494-9fbb2\" (UID: \"e540333c-4b4d-439e-a82a-cd3a97c95a43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.751297 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eba1e82e-9f3e-4273-836e-9407cc394b10-cco-trusted-ca\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.751322 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/6967590c-695e-4e20-964b-0c643abdf367-serving-cert\") pod \"kube-apiserver-operator-68f5d95b74-9h5mv\" (UID: \"6967590c-695e-4e20-964b-0c643abdf367\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.751338 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-etcd-ca\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.754484 master-2 kubenswrapper[4776]: I1011 10:28:04.751356 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751382 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc-operand-assets\") pod \"cluster-olm-operator-77b56b6f4f-dczh4\" (UID: \"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751399 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krh5m\" (UniqueName: \"kubernetes.io/projected/893af718-1fec-4b8b-8349-d85f978f4140-kube-api-access-krh5m\") pod \"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751421 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8050d30-444b-40a5-829c-1e3b788910a0-config\") pod \"openshift-apiserver-operator-7d88655794-7jd4q\" (UID: \"f8050d30-444b-40a5-829c-1e3b788910a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751437 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08b7d4e3-1682-4a3b-a757-84ded3a16764-config\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751452 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d362fb9-48e4-4d72-a940-ec6c9c051fac-serving-cert\") pod \"openshift-config-operator-55957b47d5-f7vv7\" (UID: \"9d362fb9-48e4-4d72-a940-ec6c9c051fac\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 
10:28:04.751467 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-trusted-ca\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751512 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtkpx\" (UniqueName: \"kubernetes.io/projected/7e860f23-9dae-4606-9426-0edec38a332f-kube-api-access-xtkpx\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751538 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e4536c84-d8f3-4808-bf8b-9b40695f46de-auth-proxy-config\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751564 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751585 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58aef476-6586-47bb-bf45-dbeccac6271a-kube-api-access\") pod \"openshift-kube-scheduler-operator-766d6b44f6-s5shc\" (UID: \"58aef476-6586-47bb-bf45-dbeccac6271a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751612 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751648 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dptq\" (UniqueName: \"kubernetes.io/projected/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-kube-api-access-2dptq\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751665 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b562963f-7112-411a-a64c-3b8eba909c59-trusted-ca\") pod 
\"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:04.755173 master-2 kubenswrapper[4776]: I1011 10:28:04.751710 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6whh\" (UniqueName: \"kubernetes.io/projected/7652e0ca-2d18-48c7-80e0-f4a936038377-kube-api-access-t6whh\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:04.755834 master-2 kubenswrapper[4776]: I1011 10:28:04.751726 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:04.755834 master-2 kubenswrapper[4776]: E1011 10:28:04.751791 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:04.755834 master-2 kubenswrapper[4776]: E1011 10:28:04.751831 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.251816956 +0000 UTC m=+120.036243665 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:04.755834 master-2 kubenswrapper[4776]: I1011 10:28:04.751942 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:04.755834 master-2 kubenswrapper[4776]: I1011 10:28:04.751971 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwp57\" (UniqueName: \"kubernetes.io/projected/08b7d4e3-1682-4a3b-a757-84ded3a16764-kube-api-access-fwp57\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:04.755834 master-2 kubenswrapper[4776]: I1011 10:28:04.751987 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89e02bcb-b3fe-4a45-a531-4ab41d8ee424-serving-cert\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ww4zz\" (UID: \"89e02bcb-b3fe-4a45-a531-4ab41d8ee424\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" Oct 11 10:28:04.755834 master-2 kubenswrapper[4776]: I1011 
10:28:04.752006 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/59763d5b-237f-4095-bf52-86bb0154381c-snapshots\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.755834 master-2 kubenswrapper[4776]: I1011 10:28:04.752019 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e487f283-7482-463c-90b6-a812e00d0e35-serving-cert\") pod \"kube-controller-manager-operator-5d85974df9-5gj77\" (UID: \"e487f283-7482-463c-90b6-a812e00d0e35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" Oct 11 10:28:04.755834 master-2 kubenswrapper[4776]: E1011 10:28:04.752089 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Oct 11 10:28:04.755834 master-2 kubenswrapper[4776]: E1011 10:28:04.752110 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.252101954 +0000 UTC m=+120.036528663 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-webhook-server-cert" not found Oct 11 10:28:04.760183 master-2 kubenswrapper[4776]: I1011 10:28:04.760152 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt" Oct 11 10:28:04.780631 master-2 kubenswrapper[4776]: I1011 10:28:04.779967 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Oct 11 10:28:04.801761 master-2 kubenswrapper[4776]: I1011 10:28:04.801733 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Oct 11 10:28:04.819938 master-2 kubenswrapper[4776]: I1011 10:28:04.819895 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Oct 11 10:28:04.839241 master-2 kubenswrapper[4776]: I1011 10:28:04.839207 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Oct 11 10:28:04.853745 master-2 kubenswrapper[4776]: I1011 10:28:04.853711 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmd5r\" (UniqueName: \"kubernetes.io/projected/b562963f-7112-411a-a64c-3b8eba909c59-kube-api-access-rmd5r\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:04.853803 master-2 kubenswrapper[4776]: I1011 10:28:04.853748 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: 
\"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:04.853803 master-2 kubenswrapper[4776]: I1011 10:28:04.853770 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:04.853803 master-2 kubenswrapper[4776]: I1011 10:28:04.853787 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw9qb\" (UniqueName: \"kubernetes.io/projected/e20ebc39-150b-472a-bb22-328d8f5db87b-kube-api-access-pw9qb\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:28:04.853803 master-2 kubenswrapper[4776]: I1011 10:28:04.853802 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7004f3ff-6db8-446d-94c1-1223e975299d-config\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:04.853914 master-2 kubenswrapper[4776]: I1011 10:28:04.853819 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e487f283-7482-463c-90b6-a812e00d0e35-config\") pod \"kube-controller-manager-operator-5d85974df9-5gj77\" (UID: \"e487f283-7482-463c-90b6-a812e00d0e35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" Oct 11 10:28:04.853914 master-2 kubenswrapper[4776]: I1011 10:28:04.853834 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e487f283-7482-463c-90b6-a812e00d0e35-kube-api-access\") pod \"kube-controller-manager-operator-5d85974df9-5gj77\" (UID: \"e487f283-7482-463c-90b6-a812e00d0e35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" Oct 11 10:28:04.853914 master-2 kubenswrapper[4776]: I1011 10:28:04.853852 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/dbaa6ca7-9865-42f6-8030-2decf702caa1-telemetry-config\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:04.853914 master-2 kubenswrapper[4776]: I1011 10:28:04.853869 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:04.853914 master-2 kubenswrapper[4776]: I1011 10:28:04.853884 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlz8s\" (UniqueName: 
\"kubernetes.io/projected/d4354488-1b32-422d-bb06-767a952192a5-kube-api-access-tlz8s\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:04.853914 master-2 kubenswrapper[4776]: E1011 10:28:04.853887 4776 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Oct 11 10:28:04.853914 master-2 kubenswrapper[4776]: I1011 10:28:04.853900 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7004f3ff-6db8-446d-94c1-1223e975299d-serving-cert\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:04.854102 master-2 kubenswrapper[4776]: I1011 10:28:04.853931 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qxt8\" (UniqueName: \"kubernetes.io/projected/e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc-kube-api-access-7qxt8\") pod \"cluster-olm-operator-77b56b6f4f-dczh4\" (UID: \"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" Oct 11 10:28:04.854102 master-2 kubenswrapper[4776]: E1011 10:28:04.853946 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls podName:e4536c84-d8f3-4808-bf8b-9b40695f46de nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.353927336 +0000 UTC m=+120.138354045 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls") pod "machine-config-operator-7b75469658-jtmwh" (UID: "e4536c84-d8f3-4808-bf8b-9b40695f46de") : secret "mco-proxy-tls" not found Oct 11 10:28:04.854102 master-2 kubenswrapper[4776]: I1011 10:28:04.853971 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:04.854102 master-2 kubenswrapper[4776]: I1011 10:28:04.853998 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw2ll\" (UniqueName: \"kubernetes.io/projected/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-kube-api-access-xw2ll\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:04.854102 master-2 kubenswrapper[4776]: E1011 10:28:04.854005 4776 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Oct 11 10:28:04.854102 master-2 kubenswrapper[4776]: I1011 10:28:04.854016 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:04.854102 master-2 kubenswrapper[4776]: I1011 10:28:04.854035 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6967590c-695e-4e20-964b-0c643abdf367-config\") pod \"kube-apiserver-operator-68f5d95b74-9h5mv\" (UID: \"6967590c-695e-4e20-964b-0c643abdf367\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" Oct 11 10:28:04.854102 master-2 kubenswrapper[4776]: E1011 10:28:04.854066 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert podName:eba1e82e-9f3e-4273-836e-9407cc394b10 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.354041219 +0000 UTC m=+120.138467928 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-5cf49b6487-8d7xr" (UID: "eba1e82e-9f3e-4273-836e-9407cc394b10") : secret "cloud-credential-operator-serving-cert" not found Oct 11 10:28:04.854102 master-2 kubenswrapper[4776]: I1011 10:28:04.854093 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjs47\" (UniqueName: \"kubernetes.io/projected/9d362fb9-48e4-4d72-a940-ec6c9c051fac-kube-api-access-hjs47\") pod \"openshift-config-operator-55957b47d5-f7vv7\" (UID: \"9d362fb9-48e4-4d72-a940-ec6c9c051fac\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:04.854102 master-2 kubenswrapper[4776]: E1011 10:28:04.854103 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Oct 11 10:28:04.854391 master-2 kubenswrapper[4776]: I1011 10:28:04.854120 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58aef476-6586-47bb-bf45-dbeccac6271a-serving-cert\") pod \"openshift-kube-scheduler-operator-766d6b44f6-s5shc\" (UID: \"58aef476-6586-47bb-bf45-dbeccac6271a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" Oct 11 10:28:04.854391 master-2 kubenswrapper[4776]: I1011 10:28:04.854138 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05cf2994-c049-4f42-b2d8-83b23e7e763a-serving-cert\") pod \"service-ca-operator-568c655666-84cp8\" (UID: \"05cf2994-c049-4f42-b2d8-83b23e7e763a\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" Oct 11 10:28:04.854391 master-2 kubenswrapper[4776]: E1011 10:28:04.854165 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.354141352 +0000 UTC m=+120.138568061 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "node-tuning-operator-tls" not found Oct 11 10:28:04.854391 master-2 kubenswrapper[4776]: I1011 10:28:04.854196 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29krx\" (UniqueName: \"kubernetes.io/projected/eba1e82e-9f3e-4273-836e-9407cc394b10-kube-api-access-29krx\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:04.854391 master-2 kubenswrapper[4776]: E1011 10:28:04.854224 4776 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Oct 11 10:28:04.854391 master-2 kubenswrapper[4776]: E1011 10:28:04.854279 4776 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Oct 11 10:28:04.854391 master-2 kubenswrapper[4776]: E1011 10:28:04.854297 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls podName:b562963f-7112-411a-a64c-3b8eba909c59 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.354276777 +0000 UTC m=+120.138703486 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls") pod "cluster-image-registry-operator-6b8674d7ff-mwbsr" (UID: "b562963f-7112-411a-a64c-3b8eba909c59") : secret "image-registry-operator-tls" not found Oct 11 10:28:04.854391 master-2 kubenswrapper[4776]: E1011 10:28:04.854312 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls podName:6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.354305958 +0000 UTC m=+120.138732667 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls") pod "ingress-operator-766ddf4575-wf7mj" (UID: "6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c") : secret "metrics-tls" not found Oct 11 10:28:04.854391 master-2 kubenswrapper[4776]: I1011 10:28:04.854230 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:04.854391 master-2 kubenswrapper[4776]: I1011 10:28:04.854359 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b562963f-7112-411a-a64c-3b8eba909c59-bound-sa-token\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:04.854391 master-2 kubenswrapper[4776]: I1011 10:28:04.854382 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-var-run-resolv-conf\") pod \"assisted-installer-controller-v6dfc\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: E1011 10:28:04.854397 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854422 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-var-run-resolv-conf\") pod \"assisted-installer-controller-v6dfc\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: E1011 10:28:04.854441 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert podName:e3281eb7-fb96-4bae-8c55-b79728d426b0 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.354427541 +0000 UTC m=+120.138854250 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert") pod "catalog-operator-f966fb6f8-8gkqg" (UID: "e3281eb7-fb96-4bae-8c55-b79728d426b0") : secret "catalog-operator-serving-cert" not found Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854403 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plpwg\" (UniqueName: \"kubernetes.io/projected/64310b0b-bae1-4ad3-b106-6d59d47d29b2-kube-api-access-plpwg\") pod \"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854472 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxt2q\" (UniqueName: \"kubernetes.io/projected/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-kube-api-access-mxt2q\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854491 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05cf2994-c049-4f42-b2d8-83b23e7e763a-config\") pod \"service-ca-operator-568c655666-84cp8\" (UID: \"05cf2994-c049-4f42-b2d8-83b23e7e763a\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854505 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-etcd-service-ca\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854529 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-etcd-client\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854547 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb54g\" (UniqueName: \"kubernetes.io/projected/dbaa6ca7-9865-42f6-8030-2decf702caa1-kube-api-access-vb54g\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854562 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5q65\" (UniqueName: \"kubernetes.io/projected/a0b806b9-13ff-45fa-afba-5d0c89eac7df-kube-api-access-g5q65\") pod \"csi-snapshot-controller-operator-7ff96dd767-vv9w8\" (UID: \"a0b806b9-13ff-45fa-afba-5d0c89eac7df\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-vv9w8" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854578 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/88129ec6-6f99-42a1-842a-6a965c6b58fe-serving-cert\") pod \"openshift-controller-manager-operator-5745565d84-bq4rs\" (UID: \"88129ec6-6f99-42a1-842a-6a965c6b58fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854609 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6967590c-695e-4e20-964b-0c643abdf367-kube-api-access\") pod \"kube-apiserver-operator-68f5d95b74-9h5mv\" (UID: \"6967590c-695e-4e20-964b-0c643abdf367\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854658 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-ca-bundle\") pod \"assisted-installer-controller-v6dfc\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854689 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88129ec6-6f99-42a1-842a-6a965c6b58fe-config\") pod \"openshift-controller-manager-operator-5745565d84-bq4rs\" (UID: \"88129ec6-6f99-42a1-842a-6a965c6b58fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" Oct 11 10:28:04.854752 master-2 kubenswrapper[4776]: I1011 10:28:04.854706 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmwgj\" (UniqueName: \"kubernetes.io/projected/89e02bcb-b3fe-4a45-a531-4ab41d8ee424-kube-api-access-xmwgj\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ww4zz\" (UID: \"89e02bcb-b3fe-4a45-a531-4ab41d8ee424\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" Oct 11 10:28:04.855159 master-2 kubenswrapper[4776]: I1011 10:28:04.854722 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f54r2\" (UniqueName: \"kubernetes.io/projected/05cf2994-c049-4f42-b2d8-83b23e7e763a-kube-api-access-f54r2\") pod \"service-ca-operator-568c655666-84cp8\" (UID: \"05cf2994-c049-4f42-b2d8-83b23e7e763a\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" Oct 11 10:28:04.855159 master-2 kubenswrapper[4776]: I1011 10:28:04.854726 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6967590c-695e-4e20-964b-0c643abdf367-config\") pod \"kube-apiserver-operator-68f5d95b74-9h5mv\" (UID: \"6967590c-695e-4e20-964b-0c643abdf367\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" Oct 11 10:28:04.855159 master-2 kubenswrapper[4776]: I1011 10:28:04.854738 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcdgf\" (UniqueName: \"kubernetes.io/projected/f8050d30-444b-40a5-829c-1e3b788910a0-kube-api-access-rcdgf\") pod \"openshift-apiserver-operator-7d88655794-7jd4q\" (UID: \"f8050d30-444b-40a5-829c-1e3b788910a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" Oct 11 10:28:04.855159 master-2 kubenswrapper[4776]: I1011 10:28:04.854769 
4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:28:04.855159 master-2 kubenswrapper[4776]: I1011 10:28:04.854790 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-config\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.855159 master-2 kubenswrapper[4776]: I1011 10:28:04.854807 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59763d5b-237f-4095-bf52-86bb0154381c-trusted-ca-bundle\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.855159 master-2 kubenswrapper[4776]: I1011 10:28:04.854822 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-ca-bundle\") pod \"assisted-installer-controller-v6dfc\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:04.855159 master-2 kubenswrapper[4776]: I1011 10:28:04.854822 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89e02bcb-b3fe-4a45-a531-4ab41d8ee424-config\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ww4zz\" (UID: \"89e02bcb-b3fe-4a45-a531-4ab41d8ee424\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" Oct 11 10:28:04.855159 master-2 kubenswrapper[4776]: I1011 10:28:04.854788 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e487f283-7482-463c-90b6-a812e00d0e35-config\") pod \"kube-controller-manager-operator-5d85974df9-5gj77\" (UID: \"e487f283-7482-463c-90b6-a812e00d0e35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" Oct 11 10:28:04.855467 master-2 kubenswrapper[4776]: I1011 10:28:04.855439 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7004f3ff-6db8-446d-94c1-1223e975299d-config\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:04.855514 master-2 kubenswrapper[4776]: I1011 10:28:04.855493 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88129ec6-6f99-42a1-842a-6a965c6b58fe-config\") pod \"openshift-controller-manager-operator-5745565d84-bq4rs\" (UID: \"88129ec6-6f99-42a1-842a-6a965c6b58fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" Oct 11 10:28:04.855595 master-2 kubenswrapper[4776]: I1011 10:28:04.855575 4776 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58aef476-6586-47bb-bf45-dbeccac6271a-config\") pod \"openshift-kube-scheduler-operator-766d6b44f6-s5shc\" (UID: \"58aef476-6586-47bb-bf45-dbeccac6271a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" Oct 11 10:28:04.855635 master-2 kubenswrapper[4776]: I1011 10:28:04.855606 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vc8b\" (UniqueName: \"kubernetes.io/projected/e540333c-4b4d-439e-a82a-cd3a97c95a43-kube-api-access-2vc8b\") pod \"cluster-storage-operator-56d4b95494-9fbb2\" (UID: \"e540333c-4b4d-439e-a82a-cd3a97c95a43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" Oct 11 10:28:04.855718 master-2 kubenswrapper[4776]: I1011 10:28:04.855627 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9d362fb9-48e4-4d72-a940-ec6c9c051fac-available-featuregates\") pod \"openshift-config-operator-55957b47d5-f7vv7\" (UID: \"9d362fb9-48e4-4d72-a940-ec6c9c051fac\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:04.855765 master-2 kubenswrapper[4776]: E1011 10:28:04.855753 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Oct 11 10:28:04.855799 master-2 kubenswrapper[4776]: E1011 10:28:04.855784 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert podName:e20ebc39-150b-472a-bb22-328d8f5db87b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.355774299 +0000 UTC m=+120.140201008 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert") pod "package-server-manager-798cc87f55-xzntp" (UID: "e20ebc39-150b-472a-bb22-328d8f5db87b") : secret "package-server-manager-serving-cert" not found Oct 11 10:28:04.856495 master-2 kubenswrapper[4776]: I1011 10:28:04.856448 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05cf2994-c049-4f42-b2d8-83b23e7e763a-config\") pod \"service-ca-operator-568c655666-84cp8\" (UID: \"05cf2994-c049-4f42-b2d8-83b23e7e763a\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" Oct 11 10:28:04.856546 master-2 kubenswrapper[4776]: I1011 10:28:04.856480 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89e02bcb-b3fe-4a45-a531-4ab41d8ee424-config\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ww4zz\" (UID: \"89e02bcb-b3fe-4a45-a531-4ab41d8ee424\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" Oct 11 10:28:04.856844 master-2 kubenswrapper[4776]: I1011 10:28:04.856818 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls\") pod \"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:04.856900 master-2 kubenswrapper[4776]: I1011 10:28:04.856855 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b7hm\" (UniqueName: \"kubernetes.io/projected/cbf33a7e-abea-411d-9a19-85cfe67debe3-kube-api-access-9b7hm\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:28:04.856900 master-2 kubenswrapper[4776]: I1011 10:28:04.856894 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b16a4f10-c724-43cf-acd4-b3f5aa575653-trusted-ca\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:04.856971 master-2 kubenswrapper[4776]: I1011 10:28:04.856920 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/dbaa6ca7-9865-42f6-8030-2decf702caa1-telemetry-config\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:04.856971 master-2 kubenswrapper[4776]: I1011 10:28:04.856935 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lznwk\" (UniqueName: \"kubernetes.io/projected/18ca0678-0b0d-4d5d-bc50-a0a098301f38-kube-api-access-lznwk\") pod \"iptables-alerter-5mn8b\" (UID: \"18ca0678-0b0d-4d5d-bc50-a0a098301f38\") " pod="openshift-network-operator/iptables-alerter-5mn8b" Oct 11 10:28:04.856971 master-2 kubenswrapper[4776]: E1011 10:28:04.856955 4776 secret.go:189] Couldn't get secret 
openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Oct 11 10:28:04.857075 master-2 kubenswrapper[4776]: I1011 10:28:04.856987 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwznd\" (UniqueName: \"kubernetes.io/projected/59763d5b-237f-4095-bf52-86bb0154381c-kube-api-access-hwznd\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.857075 master-2 kubenswrapper[4776]: E1011 10:28:04.857053 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls podName:893af718-1fec-4b8b-8349-d85f978f4140 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.357008926 +0000 UTC m=+120.141435695 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls") pod "dns-operator-7769d9677-wh775" (UID: "893af718-1fec-4b8b-8349-d85f978f4140") : secret "metrics-tls" not found Oct 11 10:28:04.857142 master-2 kubenswrapper[4776]: I1011 10:28:04.857120 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/18ca0678-0b0d-4d5d-bc50-a0a098301f38-iptables-alerter-script\") pod \"iptables-alerter-5mn8b\" (UID: \"18ca0678-0b0d-4d5d-bc50-a0a098301f38\") " pod="openshift-network-operator/iptables-alerter-5mn8b" Oct 11 10:28:04.857228 master-2 kubenswrapper[4776]: I1011 10:28:04.857187 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-config\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.857228 master-2 kubenswrapper[4776]: I1011 10:28:04.857204 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58aef476-6586-47bb-bf45-dbeccac6271a-config\") pod \"openshift-kube-scheduler-operator-766d6b44f6-s5shc\" (UID: \"58aef476-6586-47bb-bf45-dbeccac6271a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" Oct 11 10:28:04.857373 master-2 kubenswrapper[4776]: I1011 10:28:04.857235 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59763d5b-237f-4095-bf52-86bb0154381c-service-ca-bundle\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.857373 master-2 kubenswrapper[4776]: I1011 10:28:04.857319 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-resolv-conf\") pod \"assisted-installer-controller-v6dfc\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:04.857465 master-2 kubenswrapper[4776]: I1011 10:28:04.857423 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/548333d7-2374-4c38-b4fd-45c2bee2ac4e-config\") pod 
\"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:04.857579 master-2 kubenswrapper[4776]: I1011 10:28:04.857430 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-resolv-conf\") pod \"assisted-installer-controller-v6dfc\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:04.857639 master-2 kubenswrapper[4776]: I1011 10:28:04.857292 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-etcd-service-ca\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.857786 master-2 kubenswrapper[4776]: I1011 10:28:04.857740 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:04.857933 master-2 kubenswrapper[4776]: E1011 10:28:04.857911 4776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Oct 11 10:28:04.857981 master-2 kubenswrapper[4776]: E1011 10:28:04.857964 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls podName:548333d7-2374-4c38-b4fd-45c2bee2ac4e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.357950752 +0000 UTC m=+120.142377471 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls") pod "machine-api-operator-9dbb96f7-b88g6" (UID: "548333d7-2374-4c38-b4fd-45c2bee2ac4e") : secret "machine-api-operator-tls" not found Oct 11 10:28:04.858062 master-2 kubenswrapper[4776]: I1011 10:28:04.858031 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59763d5b-237f-4095-bf52-86bb0154381c-service-ca-bundle\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.858167 master-2 kubenswrapper[4776]: I1011 10:28:04.857852 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfh8n\" (UniqueName: \"kubernetes.io/projected/548333d7-2374-4c38-b4fd-45c2bee2ac4e-kube-api-access-dfh8n\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:04.858343 master-2 kubenswrapper[4776]: I1011 10:28:04.858269 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e4536c84-d8f3-4808-bf8b-9b40695f46de-images\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:04.858386 master-2 kubenswrapper[4776]: I1011 10:28:04.858355 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/548333d7-2374-4c38-b4fd-45c2bee2ac4e-config\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:04.858533 master-2 kubenswrapper[4776]: I1011 10:28:04.858501 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7004f3ff-6db8-446d-94c1-1223e975299d-serving-cert\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:04.858748 master-2 kubenswrapper[4776]: I1011 10:28:04.858722 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9d362fb9-48e4-4d72-a940-ec6c9c051fac-available-featuregates\") pod \"openshift-config-operator-55957b47d5-f7vv7\" (UID: \"9d362fb9-48e4-4d72-a940-ec6c9c051fac\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:04.859101 master-2 kubenswrapper[4776]: I1011 10:28:04.858687 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59763d5b-237f-4095-bf52-86bb0154381c-trusted-ca-bundle\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.859213 master-2 kubenswrapper[4776]: I1011 10:28:04.859185 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e4536c84-d8f3-4808-bf8b-9b40695f46de-images\") 
pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:04.859371 master-2 kubenswrapper[4776]: I1011 10:28:04.859320 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:28:04.859555 master-2 kubenswrapper[4776]: E1011 10:28:04.859522 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 11 10:28:04.859744 master-2 kubenswrapper[4776]: E1011 10:28:04.859712 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs podName:64310b0b-bae1-4ad3-b106-6d59d47d29b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.359568968 +0000 UTC m=+120.143995677 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs") pod "multus-admission-controller-77b66fddc8-5r2t9" (UID: "64310b0b-bae1-4ad3-b106-6d59d47d29b2") : secret "multus-admission-controller-secret" not found Oct 11 10:28:04.859744 master-2 kubenswrapper[4776]: I1011 10:28:04.859520 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f979h\" (UniqueName: \"kubernetes.io/projected/e3281eb7-fb96-4bae-8c55-b79728d426b0-kube-api-access-f979h\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:04.859839 master-2 kubenswrapper[4776]: I1011 10:28:04.859177 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58aef476-6586-47bb-bf45-dbeccac6271a-serving-cert\") pod \"openshift-kube-scheduler-operator-766d6b44f6-s5shc\" (UID: \"58aef476-6586-47bb-bf45-dbeccac6271a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" Oct 11 10:28:04.859962 master-2 kubenswrapper[4776]: I1011 10:28:04.859830 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bzgn\" (UniqueName: \"kubernetes.io/projected/7004f3ff-6db8-446d-94c1-1223e975299d-kube-api-access-8bzgn\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:04.860007 master-2 kubenswrapper[4776]: I1011 10:28:04.859978 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/08b7d4e3-1682-4a3b-a757-84ded3a16764-auth-proxy-config\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:04.860188 master-2 kubenswrapper[4776]: I1011 10:28:04.860023 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e540333c-4b4d-439e-a82a-cd3a97c95a43-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-56d4b95494-9fbb2\" (UID: \"e540333c-4b4d-439e-a82a-cd3a97c95a43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" Oct 11 10:28:04.860223 master-2 kubenswrapper[4776]: I1011 10:28:04.860198 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eba1e82e-9f3e-4273-836e-9407cc394b10-cco-trusted-ca\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:04.860250 master-2 kubenswrapper[4776]: I1011 10:28:04.860231 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6967590c-695e-4e20-964b-0c643abdf367-serving-cert\") pod \"kube-apiserver-operator-68f5d95b74-9h5mv\" (UID: \"6967590c-695e-4e20-964b-0c643abdf367\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" Oct 11 10:28:04.860348 master-2 kubenswrapper[4776]: I1011 10:28:04.860264 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-etcd-ca\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.860512 master-2 kubenswrapper[4776]: I1011 10:28:04.860360 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:28:04.860512 master-2 kubenswrapper[4776]: I1011 10:28:04.860522 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc-operand-assets\") pod \"cluster-olm-operator-77b56b6f4f-dczh4\" (UID: \"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" Oct 11 10:28:04.860616 master-2 kubenswrapper[4776]: I1011 10:28:04.860558 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krh5m\" (UniqueName: \"kubernetes.io/projected/893af718-1fec-4b8b-8349-d85f978f4140-kube-api-access-krh5m\") pod \"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:04.860616 master-2 kubenswrapper[4776]: I1011 10:28:04.860586 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8050d30-444b-40a5-829c-1e3b788910a0-config\") pod \"openshift-apiserver-operator-7d88655794-7jd4q\" (UID: \"f8050d30-444b-40a5-829c-1e3b788910a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" Oct 11 10:28:04.860667 master-2 kubenswrapper[4776]: I1011 10:28:04.860616 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/08b7d4e3-1682-4a3b-a757-84ded3a16764-config\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:04.860667 master-2 kubenswrapper[4776]: I1011 10:28:04.860648 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d362fb9-48e4-4d72-a940-ec6c9c051fac-serving-cert\") pod \"openshift-config-operator-55957b47d5-f7vv7\" (UID: \"9d362fb9-48e4-4d72-a940-ec6c9c051fac\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:04.860888 master-2 kubenswrapper[4776]: I1011 10:28:04.860694 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-trusted-ca\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:04.860927 master-2 kubenswrapper[4776]: I1011 10:28:04.860899 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtkpx\" (UniqueName: \"kubernetes.io/projected/7e860f23-9dae-4606-9426-0edec38a332f-kube-api-access-xtkpx\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:28:04.860956 master-2 kubenswrapper[4776]: I1011 10:28:04.860940 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e4536c84-d8f3-4808-bf8b-9b40695f46de-auth-proxy-config\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:04.861075 master-2 kubenswrapper[4776]: I1011 10:28:04.861055 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/18ca0678-0b0d-4d5d-bc50-a0a098301f38-host-slash\") pod \"iptables-alerter-5mn8b\" (UID: \"18ca0678-0b0d-4d5d-bc50-a0a098301f38\") " pod="openshift-network-operator/iptables-alerter-5mn8b" Oct 11 10:28:04.861159 master-2 kubenswrapper[4776]: I1011 10:28:04.861138 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:28:04.861252 master-2 kubenswrapper[4776]: I1011 10:28:04.861230 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58aef476-6586-47bb-bf45-dbeccac6271a-kube-api-access\") pod \"openshift-kube-scheduler-operator-766d6b44f6-s5shc\" (UID: \"58aef476-6586-47bb-bf45-dbeccac6271a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" Oct 11 10:28:04.861358 master-2 kubenswrapper[4776]: I1011 10:28:04.861332 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/b562963f-7112-411a-a64c-3b8eba909c59-trusted-ca\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:04.861393 master-2 kubenswrapper[4776]: I1011 10:28:04.861375 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6whh\" (UniqueName: \"kubernetes.io/projected/7652e0ca-2d18-48c7-80e0-f4a936038377-kube-api-access-t6whh\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:04.861490 master-2 kubenswrapper[4776]: I1011 10:28:04.861422 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwp57\" (UniqueName: \"kubernetes.io/projected/08b7d4e3-1682-4a3b-a757-84ded3a16764-kube-api-access-fwp57\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:04.861523 master-2 kubenswrapper[4776]: I1011 10:28:04.861500 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eba1e82e-9f3e-4273-836e-9407cc394b10-cco-trusted-ca\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:04.861523 master-2 kubenswrapper[4776]: I1011 10:28:04.861509 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:04.861814 master-2 kubenswrapper[4776]: I1011 10:28:04.861765 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b16a4f10-c724-43cf-acd4-b3f5aa575653-trusted-ca\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:04.861898 master-2 kubenswrapper[4776]: E1011 10:28:04.861784 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 11 10:28:04.861963 master-2 kubenswrapper[4776]: I1011 10:28:04.861902 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89e02bcb-b3fe-4a45-a531-4ab41d8ee424-serving-cert\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ww4zz\" (UID: \"89e02bcb-b3fe-4a45-a531-4ab41d8ee424\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" Oct 11 10:28:04.861963 master-2 kubenswrapper[4776]: E1011 10:28:04.861928 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs podName:cbf33a7e-abea-411d-9a19-85cfe67debe3 nodeName:}" failed. 
No retries permitted until 2025-10-11 10:28:05.361910466 +0000 UTC m=+120.146337175 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs") pod "multus-admission-controller-77b66fddc8-s5r5b" (UID: "cbf33a7e-abea-411d-9a19-85cfe67debe3") : secret "multus-admission-controller-secret" not found Oct 11 10:28:04.862024 master-2 kubenswrapper[4776]: I1011 10:28:04.861964 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/08b7d4e3-1682-4a3b-a757-84ded3a16764-auth-proxy-config\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:04.862050 master-2 kubenswrapper[4776]: E1011 10:28:04.862018 4776 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Oct 11 10:28:04.862138 master-2 kubenswrapper[4776]: E1011 10:28:04.862110 4776 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Oct 11 10:28:04.862138 master-2 kubenswrapper[4776]: E1011 10:28:04.862129 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls podName:7e860f23-9dae-4606-9426-0edec38a332f nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.362120362 +0000 UTC m=+120.146547071 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-84f9cbd5d9-bjntd" (UID: "7e860f23-9dae-4606-9426-0edec38a332f") : secret "control-plane-machine-set-operator-tls" not found Oct 11 10:28:04.862201 master-2 kubenswrapper[4776]: E1011 10:28:04.862172 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls podName:dbaa6ca7-9865-42f6-8030-2decf702caa1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.362157683 +0000 UTC m=+120.146584392 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-5b5dd85dcc-h8588" (UID: "dbaa6ca7-9865-42f6-8030-2decf702caa1") : secret "cluster-monitoring-operator-tls" not found Oct 11 10:28:04.862201 master-2 kubenswrapper[4776]: I1011 10:28:04.862110 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/59763d5b-237f-4095-bf52-86bb0154381c-snapshots\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.862251 master-2 kubenswrapper[4776]: I1011 10:28:04.862221 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e487f283-7482-463c-90b6-a812e00d0e35-serving-cert\") pod \"kube-controller-manager-operator-5d85974df9-5gj77\" (UID: \"e487f283-7482-463c-90b6-a812e00d0e35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" Oct 11 10:28:04.862280 master-2 kubenswrapper[4776]: I1011 10:28:04.862263 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8050d30-444b-40a5-829c-1e3b788910a0-serving-cert\") pod \"openshift-apiserver-operator-7d88655794-7jd4q\" (UID: \"f8050d30-444b-40a5-829c-1e3b788910a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" Oct 11 10:28:04.862319 master-2 kubenswrapper[4776]: I1011 10:28:04.862300 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-profile-collector-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:04.862363 master-2 kubenswrapper[4776]: I1011 10:28:04.862345 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-serving-cert\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.862423 master-2 kubenswrapper[4776]: I1011 10:28:04.862400 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59763d5b-237f-4095-bf52-86bb0154381c-serving-cert\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.862480 master-2 kubenswrapper[4776]: I1011 10:28:04.862450 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e4536c84-d8f3-4808-bf8b-9b40695f46de-auth-proxy-config\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:04.862517 master-2 kubenswrapper[4776]: I1011 10:28:04.862444 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xpxlw\" (UniqueName: \"kubernetes.io/projected/8757af56-20fb-439e-adba-7e4e50378936-kube-api-access-xpxlw\") pod \"assisted-installer-controller-v6dfc\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:04.862557 master-2 kubenswrapper[4776]: I1011 10:28:04.862539 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w627\" (UniqueName: \"kubernetes.io/projected/88129ec6-6f99-42a1-842a-6a965c6b58fe-kube-api-access-4w627\") pod \"openshift-controller-manager-operator-5745565d84-bq4rs\" (UID: \"88129ec6-6f99-42a1-842a-6a965c6b58fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" Oct 11 10:28:04.862588 master-2 kubenswrapper[4776]: I1011 10:28:04.862575 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77b56b6f4f-dczh4\" (UID: \"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" Oct 11 10:28:04.862702 master-2 kubenswrapper[4776]: I1011 10:28:04.862595 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-trusted-ca\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:04.862702 master-2 kubenswrapper[4776]: I1011 10:28:04.862605 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:04.862702 master-2 kubenswrapper[4776]: I1011 10:28:04.862689 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxlns\" (UniqueName: \"kubernetes.io/projected/b16a4f10-c724-43cf-acd4-b3f5aa575653-kube-api-access-mxlns\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:04.862798 master-2 kubenswrapper[4776]: I1011 10:28:04.862706 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc-operand-assets\") pod \"cluster-olm-operator-77b56b6f4f-dczh4\" (UID: \"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" Oct 11 10:28:04.862798 master-2 kubenswrapper[4776]: I1011 10:28:04.862724 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5wnj\" (UniqueName: \"kubernetes.io/projected/e4536c84-d8f3-4808-bf8b-9b40695f46de-kube-api-access-x5wnj\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:04.862853 master-2 kubenswrapper[4776]: 
I1011 10:28:04.862825 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-profile-collector-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:04.862921 master-2 kubenswrapper[4776]: I1011 10:28:04.862895 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:04.862957 master-2 kubenswrapper[4776]: I1011 10:28:04.862823 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08b7d4e3-1682-4a3b-a757-84ded3a16764-config\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:04.862984 master-2 kubenswrapper[4776]: I1011 10:28:04.862952 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-bound-sa-token\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:04.862984 master-2 kubenswrapper[4776]: I1011 10:28:04.862975 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8050d30-444b-40a5-829c-1e3b788910a0-config\") pod \"openshift-apiserver-operator-7d88655794-7jd4q\" (UID: \"f8050d30-444b-40a5-829c-1e3b788910a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" Oct 11 10:28:04.863286 master-2 kubenswrapper[4776]: I1011 10:28:04.863259 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b562963f-7112-411a-a64c-3b8eba909c59-trusted-ca\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:04.863503 master-2 kubenswrapper[4776]: I1011 10:28:04.863480 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-etcd-ca\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.863797 master-2 kubenswrapper[4776]: I1011 10:28:04.863765 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e540333c-4b4d-439e-a82a-cd3a97c95a43-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-56d4b95494-9fbb2\" (UID: \"e540333c-4b4d-439e-a82a-cd3a97c95a43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" Oct 11 10:28:04.863855 master-2 kubenswrapper[4776]: I1011 10:28:04.863789 4776 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/59763d5b-237f-4095-bf52-86bb0154381c-snapshots\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.864046 master-2 kubenswrapper[4776]: I1011 10:28:04.863999 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-etcd-client\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.864263 master-2 kubenswrapper[4776]: E1011 10:28:04.864237 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Oct 11 10:28:04.864318 master-2 kubenswrapper[4776]: I1011 10:28:04.864253 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88129ec6-6f99-42a1-842a-6a965c6b58fe-serving-cert\") pod \"openshift-controller-manager-operator-5745565d84-bq4rs\" (UID: \"88129ec6-6f99-42a1-842a-6a965c6b58fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" Oct 11 10:28:04.864318 master-2 kubenswrapper[4776]: E1011 10:28:04.864291 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.364279834 +0000 UTC m=+120.148706543 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "performance-addon-operator-webhook-cert" not found Oct 11 10:28:04.864318 master-2 kubenswrapper[4776]: I1011 10:28:04.864289 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05cf2994-c049-4f42-b2d8-83b23e7e763a-serving-cert\") pod \"service-ca-operator-568c655666-84cp8\" (UID: \"05cf2994-c049-4f42-b2d8-83b23e7e763a\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" Oct 11 10:28:04.864654 master-2 kubenswrapper[4776]: E1011 10:28:04.864561 4776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Oct 11 10:28:04.864880 master-2 kubenswrapper[4776]: I1011 10:28:04.864713 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:04.864880 master-2 kubenswrapper[4776]: E1011 10:28:04.864738 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls podName:08b7d4e3-1682-4a3b-a757-84ded3a16764 nodeName:}" failed. 
No retries permitted until 2025-10-11 10:28:05.364718757 +0000 UTC m=+120.149145466 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls") pod "machine-approver-7876f99457-h7hhv" (UID: "08b7d4e3-1682-4a3b-a757-84ded3a16764") : secret "machine-approver-tls" not found Oct 11 10:28:04.864880 master-2 kubenswrapper[4776]: E1011 10:28:04.864805 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Oct 11 10:28:04.864880 master-2 kubenswrapper[4776]: I1011 10:28:04.864846 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7004f3ff-6db8-446d-94c1-1223e975299d-service-ca-bundle\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:04.865021 master-2 kubenswrapper[4776]: I1011 10:28:04.864853 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89e02bcb-b3fe-4a45-a531-4ab41d8ee424-serving-cert\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ww4zz\" (UID: \"89e02bcb-b3fe-4a45-a531-4ab41d8ee424\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" Oct 11 10:28:04.865192 master-2 kubenswrapper[4776]: I1011 10:28:04.865169 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7004f3ff-6db8-446d-94c1-1223e975299d-service-ca-bundle\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:04.865700 master-2 kubenswrapper[4776]: E1011 10:28:04.865635 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert podName:d4354488-1b32-422d-bb06-767a952192a5 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.36483781 +0000 UTC m=+120.149264569 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert") pod "olm-operator-867f8475d9-8lf59" (UID: "d4354488-1b32-422d-bb06-767a952192a5") : secret "olm-operator-serving-cert" not found Oct 11 10:28:04.866510 master-2 kubenswrapper[4776]: I1011 10:28:04.865642 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6967590c-695e-4e20-964b-0c643abdf367-serving-cert\") pod \"kube-apiserver-operator-68f5d95b74-9h5mv\" (UID: \"6967590c-695e-4e20-964b-0c643abdf367\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" Oct 11 10:28:04.866549 master-2 kubenswrapper[4776]: I1011 10:28:04.866259 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-serving-cert\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:04.866648 master-2 kubenswrapper[4776]: I1011 10:28:04.866563 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-trusted-ca\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:04.867300 master-2 kubenswrapper[4776]: I1011 10:28:04.866589 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59763d5b-237f-4095-bf52-86bb0154381c-serving-cert\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:04.867300 master-2 kubenswrapper[4776]: I1011 10:28:04.867281 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-profile-collector-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:04.867300 master-2 kubenswrapper[4776]: I1011 10:28:04.867307 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:04.867445 master-2 kubenswrapper[4776]: I1011 10:28:04.867344 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7004f3ff-6db8-446d-94c1-1223e975299d-trusted-ca-bundle\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:04.867445 master-2 kubenswrapper[4776]: I1011 10:28:04.867363 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/548333d7-2374-4c38-b4fd-45c2bee2ac4e-images\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:04.867445 master-2 kubenswrapper[4776]: E1011 10:28:04.867366 4776 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Oct 11 10:28:04.867445 master-2 kubenswrapper[4776]: E1011 10:28:04.867403 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics podName:7652e0ca-2d18-48c7-80e0-f4a936038377 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:05.367393844 +0000 UTC m=+120.151820543 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics") pod "marketplace-operator-c4f798dd4-wsmdd" (UID: "7652e0ca-2d18-48c7-80e0-f4a936038377") : secret "marketplace-operator-metrics" not found Oct 11 10:28:04.867544 master-2 kubenswrapper[4776]: I1011 10:28:04.867459 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8050d30-444b-40a5-829c-1e3b788910a0-serving-cert\") pod \"openshift-apiserver-operator-7d88655794-7jd4q\" (UID: \"f8050d30-444b-40a5-829c-1e3b788910a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" Oct 11 10:28:04.867632 master-2 kubenswrapper[4776]: I1011 10:28:04.867608 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77b56b6f4f-dczh4\" (UID: \"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" Oct 11 10:28:04.867892 master-2 kubenswrapper[4776]: I1011 10:28:04.867855 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d362fb9-48e4-4d72-a940-ec6c9c051fac-serving-cert\") pod \"openshift-config-operator-55957b47d5-f7vv7\" (UID: \"9d362fb9-48e4-4d72-a940-ec6c9c051fac\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:04.868097 master-2 kubenswrapper[4776]: I1011 10:28:04.868064 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-trusted-ca\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:04.868097 master-2 kubenswrapper[4776]: I1011 10:28:04.868084 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/548333d7-2374-4c38-b4fd-45c2bee2ac4e-images\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:04.868166 master-2 kubenswrapper[4776]: I1011 10:28:04.868124 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-profile-collector-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:04.868325 master-2 kubenswrapper[4776]: I1011 10:28:04.868294 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e487f283-7482-463c-90b6-a812e00d0e35-serving-cert\") pod \"kube-controller-manager-operator-5d85974df9-5gj77\" (UID: \"e487f283-7482-463c-90b6-a812e00d0e35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" Oct 11 10:28:04.869992 master-2 kubenswrapper[4776]: I1011 10:28:04.869948 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7004f3ff-6db8-446d-94c1-1223e975299d-trusted-ca-bundle\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:04.879584 master-2 kubenswrapper[4776]: I1011 10:28:04.879547 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Oct 11 10:28:04.899332 master-2 kubenswrapper[4776]: I1011 10:28:04.899303 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Oct 11 10:28:04.919727 master-2 kubenswrapper[4776]: I1011 10:28:04.919669 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Oct 11 10:28:04.957249 master-2 kubenswrapper[4776]: I1011 10:28:04.957183 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc6zm\" (UniqueName: \"kubernetes.io/projected/66dee5be-e631-462d-8a2c-51a2031a83a2-kube-api-access-gc6zm\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:04.968293 master-2 kubenswrapper[4776]: I1011 10:28:04.968252 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lznwk\" (UniqueName: \"kubernetes.io/projected/18ca0678-0b0d-4d5d-bc50-a0a098301f38-kube-api-access-lznwk\") pod \"iptables-alerter-5mn8b\" (UID: \"18ca0678-0b0d-4d5d-bc50-a0a098301f38\") " pod="openshift-network-operator/iptables-alerter-5mn8b" Oct 11 10:28:04.968376 master-2 kubenswrapper[4776]: I1011 10:28:04.968304 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/18ca0678-0b0d-4d5d-bc50-a0a098301f38-iptables-alerter-script\") pod \"iptables-alerter-5mn8b\" (UID: \"18ca0678-0b0d-4d5d-bc50-a0a098301f38\") " pod="openshift-network-operator/iptables-alerter-5mn8b" Oct 11 10:28:04.968413 master-2 kubenswrapper[4776]: I1011 10:28:04.968390 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/18ca0678-0b0d-4d5d-bc50-a0a098301f38-host-slash\") pod \"iptables-alerter-5mn8b\" (UID: \"18ca0678-0b0d-4d5d-bc50-a0a098301f38\") " pod="openshift-network-operator/iptables-alerter-5mn8b" Oct 11 10:28:04.968554 master-2 kubenswrapper[4776]: I1011 10:28:04.968525 4776 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/18ca0678-0b0d-4d5d-bc50-a0a098301f38-host-slash\") pod \"iptables-alerter-5mn8b\" (UID: \"18ca0678-0b0d-4d5d-bc50-a0a098301f38\") " pod="openshift-network-operator/iptables-alerter-5mn8b" Oct 11 10:28:04.969128 master-2 kubenswrapper[4776]: I1011 10:28:04.969085 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/18ca0678-0b0d-4d5d-bc50-a0a098301f38-iptables-alerter-script\") pod \"iptables-alerter-5mn8b\" (UID: \"18ca0678-0b0d-4d5d-bc50-a0a098301f38\") " pod="openshift-network-operator/iptables-alerter-5mn8b" Oct 11 10:28:04.975031 master-2 kubenswrapper[4776]: I1011 10:28:04.974998 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dptq\" (UniqueName: \"kubernetes.io/projected/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-kube-api-access-2dptq\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:04.994184 master-2 kubenswrapper[4776]: I1011 10:28:04.993999 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw9qb\" (UniqueName: \"kubernetes.io/projected/e20ebc39-150b-472a-bb22-328d8f5db87b-kube-api-access-pw9qb\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:28:05.012937 master-2 kubenswrapper[4776]: I1011 10:28:05.012877 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlz8s\" (UniqueName: \"kubernetes.io/projected/d4354488-1b32-422d-bb06-767a952192a5-kube-api-access-tlz8s\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:05.042251 master-2 kubenswrapper[4776]: I1011 10:28:05.042198 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e487f283-7482-463c-90b6-a812e00d0e35-kube-api-access\") pod \"kube-controller-manager-operator-5d85974df9-5gj77\" (UID: \"e487f283-7482-463c-90b6-a812e00d0e35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" Oct 11 10:28:05.057916 master-2 kubenswrapper[4776]: I1011 10:28:05.057874 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:28:05.058040 master-2 kubenswrapper[4776]: I1011 10:28:05.057972 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:28:05.065162 master-2 kubenswrapper[4776]: I1011 10:28:05.065126 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29krx\" (UniqueName: \"kubernetes.io/projected/eba1e82e-9f3e-4273-836e-9407cc394b10-kube-api-access-29krx\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:05.084031 master-2 kubenswrapper[4776]: I1011 10:28:05.083964 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qxt8\" (UniqueName: \"kubernetes.io/projected/e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc-kube-api-access-7qxt8\") pod \"cluster-olm-operator-77b56b6f4f-dczh4\" (UID: \"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" Oct 11 10:28:05.090300 master-2 kubenswrapper[4776]: I1011 10:28:05.090239 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" Oct 11 10:28:05.093533 master-2 kubenswrapper[4776]: I1011 10:28:05.093477 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjs47\" (UniqueName: \"kubernetes.io/projected/9d362fb9-48e4-4d72-a940-ec6c9c051fac-kube-api-access-hjs47\") pod \"openshift-config-operator-55957b47d5-f7vv7\" (UID: \"9d362fb9-48e4-4d72-a940-ec6c9c051fac\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:05.114635 master-2 kubenswrapper[4776]: I1011 10:28:05.114571 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw2ll\" (UniqueName: \"kubernetes.io/projected/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-kube-api-access-xw2ll\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:05.135058 master-2 kubenswrapper[4776]: I1011 10:28:05.134994 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" Oct 11 10:28:05.144611 master-2 kubenswrapper[4776]: I1011 10:28:05.144585 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b562963f-7112-411a-a64c-3b8eba909c59-bound-sa-token\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:05.158446 master-2 kubenswrapper[4776]: I1011 10:28:05.158411 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plpwg\" (UniqueName: \"kubernetes.io/projected/64310b0b-bae1-4ad3-b106-6d59d47d29b2-kube-api-access-plpwg\") pod \"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:28:05.186598 master-2 kubenswrapper[4776]: I1011 10:28:05.186578 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcdgf\" (UniqueName: \"kubernetes.io/projected/f8050d30-444b-40a5-829c-1e3b788910a0-kube-api-access-rcdgf\") pod \"openshift-apiserver-operator-7d88655794-7jd4q\" (UID: \"f8050d30-444b-40a5-829c-1e3b788910a0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" Oct 11 10:28:05.204529 master-2 kubenswrapper[4776]: I1011 10:28:05.204427 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5q65\" (UniqueName: \"kubernetes.io/projected/a0b806b9-13ff-45fa-afba-5d0c89eac7df-kube-api-access-g5q65\") pod \"csi-snapshot-controller-operator-7ff96dd767-vv9w8\" (UID: \"a0b806b9-13ff-45fa-afba-5d0c89eac7df\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-vv9w8" Oct 11 10:28:05.229900 master-2 kubenswrapper[4776]: I1011 10:28:05.229833 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxt2q\" (UniqueName: \"kubernetes.io/projected/a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1-kube-api-access-mxt2q\") pod \"etcd-operator-6bddf7d79-8wc54\" (UID: \"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:05.237497 master-2 kubenswrapper[4776]: I1011 10:28:05.237420 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb54g\" (UniqueName: \"kubernetes.io/projected/dbaa6ca7-9865-42f6-8030-2decf702caa1-kube-api-access-vb54g\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:05.257871 master-2 kubenswrapper[4776]: I1011 10:28:05.257823 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6967590c-695e-4e20-964b-0c643abdf367-kube-api-access\") pod \"kube-apiserver-operator-68f5d95b74-9h5mv\" (UID: \"6967590c-695e-4e20-964b-0c643abdf367\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" Oct 11 10:28:05.285335 master-2 kubenswrapper[4776]: I1011 10:28:05.274741 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls\") pod 
\"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:05.285335 master-2 kubenswrapper[4776]: I1011 10:28:05.274788 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:05.285335 master-2 kubenswrapper[4776]: I1011 10:28:05.274855 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:05.285335 master-2 kubenswrapper[4776]: E1011 10:28:05.274985 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Oct 11 10:28:05.285335 master-2 kubenswrapper[4776]: E1011 10:28:05.275026 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert podName:6b8dc5b8-3c48-4dba-9992-6e269ca133f1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.275013592 +0000 UTC m=+121.059440301 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert") pod "cluster-autoscaler-operator-7ff449c7c5-cfvjb" (UID: "6b8dc5b8-3c48-4dba-9992-6e269ca133f1") : secret "cluster-autoscaler-operator-cert" not found Oct 11 10:28:05.285335 master-2 kubenswrapper[4776]: E1011 10:28:05.275279 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:05.285335 master-2 kubenswrapper[4776]: E1011 10:28:05.275301 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.27529341 +0000 UTC m=+121.059720119 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:05.285335 master-2 kubenswrapper[4776]: E1011 10:28:05.275330 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Oct 11 10:28:05.285335 master-2 kubenswrapper[4776]: E1011 10:28:05.275346 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.275341811 +0000 UTC m=+121.059768520 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-webhook-server-cert" not found Oct 11 10:28:05.285335 master-2 kubenswrapper[4776]: I1011 10:28:05.280263 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmd5r\" (UniqueName: \"kubernetes.io/projected/b562963f-7112-411a-a64c-3b8eba909c59-kube-api-access-rmd5r\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:05.316595 master-2 kubenswrapper[4776]: I1011 10:28:05.316519 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77"] Oct 11 10:28:05.317848 master-2 kubenswrapper[4776]: I1011 10:28:05.317640 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vc8b\" (UniqueName: \"kubernetes.io/projected/e540333c-4b4d-439e-a82a-cd3a97c95a43-kube-api-access-2vc8b\") pod \"cluster-storage-operator-56d4b95494-9fbb2\" (UID: \"e540333c-4b4d-439e-a82a-cd3a97c95a43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" Oct 11 10:28:05.319164 master-2 kubenswrapper[4776]: I1011 10:28:05.319117 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmwgj\" (UniqueName: \"kubernetes.io/projected/89e02bcb-b3fe-4a45-a531-4ab41d8ee424-kube-api-access-xmwgj\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ww4zz\" (UID: \"89e02bcb-b3fe-4a45-a531-4ab41d8ee424\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" Oct 11 10:28:05.329927 master-2 kubenswrapper[4776]: W1011 10:28:05.329883 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode487f283_7482_463c_90b6_a812e00d0e35.slice/crio-682473197d1a6765d5b8eb8302055ac7da19710922e0edf3c0354b34d4fa6a07 WatchSource:0}: Error finding container 682473197d1a6765d5b8eb8302055ac7da19710922e0edf3c0354b34d4fa6a07: Status 404 returned error can't find the container with id 682473197d1a6765d5b8eb8302055ac7da19710922e0edf3c0354b34d4fa6a07 Oct 11 10:28:05.334311 master-2 kubenswrapper[4776]: I1011 10:28:05.334276 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:05.339085 master-2 kubenswrapper[4776]: I1011 10:28:05.339051 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4"] Oct 11 10:28:05.341243 master-2 kubenswrapper[4776]: I1011 10:28:05.341198 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f54r2\" (UniqueName: \"kubernetes.io/projected/05cf2994-c049-4f42-b2d8-83b23e7e763a-kube-api-access-f54r2\") pod \"service-ca-operator-568c655666-84cp8\" (UID: \"05cf2994-c049-4f42-b2d8-83b23e7e763a\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" Oct 11 10:28:05.344213 master-2 kubenswrapper[4776]: W1011 10:28:05.344170 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6e70e9c_b1bd_4f28_911c_fc6ecfd2e8fc.slice/crio-af6eef4adea766bbdc1f8cd4dbe21919a4b8dfead251251d760b6ec0c39cd78c WatchSource:0}: Error finding container af6eef4adea766bbdc1f8cd4dbe21919a4b8dfead251251d760b6ec0c39cd78c: Status 404 returned error can't find the container with id af6eef4adea766bbdc1f8cd4dbe21919a4b8dfead251251d760b6ec0c39cd78c Oct 11 10:28:05.353649 master-2 kubenswrapper[4776]: I1011 10:28:05.353612 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" Oct 11 10:28:05.359604 master-2 kubenswrapper[4776]: I1011 10:28:05.359565 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b7hm\" (UniqueName: \"kubernetes.io/projected/cbf33a7e-abea-411d-9a19-85cfe67debe3-kube-api-access-9b7hm\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:28:05.359745 master-2 kubenswrapper[4776]: I1011 10:28:05.359661 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-vv9w8" Oct 11 10:28:05.372653 master-2 kubenswrapper[4776]: I1011 10:28:05.372601 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" Oct 11 10:28:05.375957 master-2 kubenswrapper[4776]: I1011 10:28:05.375894 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:05.376035 master-2 kubenswrapper[4776]: I1011 10:28:05.375981 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:05.376077 master-2 kubenswrapper[4776]: E1011 10:28:05.376037 4776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Oct 11 10:28:05.376128 master-2 kubenswrapper[4776]: E1011 10:28:05.376095 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls podName:08b7d4e3-1682-4a3b-a757-84ded3a16764 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.376076452 +0000 UTC m=+121.160503161 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls") pod "machine-approver-7876f99457-h7hhv" (UID: "08b7d4e3-1682-4a3b-a757-84ded3a16764") : secret "machine-approver-tls" not found Oct 11 10:28:05.376232 master-2 kubenswrapper[4776]: E1011 10:28:05.376190 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Oct 11 10:28:05.376315 master-2 kubenswrapper[4776]: E1011 10:28:05.376284 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert podName:d4354488-1b32-422d-bb06-767a952192a5 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.376258157 +0000 UTC m=+121.160684916 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert") pod "olm-operator-867f8475d9-8lf59" (UID: "d4354488-1b32-422d-bb06-767a952192a5") : secret "olm-operator-serving-cert" not found Oct 11 10:28:05.376420 master-2 kubenswrapper[4776]: E1011 10:28:05.376388 4776 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Oct 11 10:28:05.376462 master-2 kubenswrapper[4776]: E1011 10:28:05.376450 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics podName:7652e0ca-2d18-48c7-80e0-f4a936038377 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.376432022 +0000 UTC m=+121.160858821 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics") pod "marketplace-operator-c4f798dd4-wsmdd" (UID: "7652e0ca-2d18-48c7-80e0-f4a936038377") : secret "marketplace-operator-metrics" not found Oct 11 10:28:05.376513 master-2 kubenswrapper[4776]: I1011 10:28:05.376039 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:05.376559 master-2 kubenswrapper[4776]: I1011 10:28:05.376521 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:05.376605 master-2 kubenswrapper[4776]: I1011 10:28:05.376575 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:05.376689 master-2 kubenswrapper[4776]: I1011 10:28:05.376639 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:05.376766 master-2 kubenswrapper[4776]: I1011 10:28:05.376737 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:05.376817 master-2 kubenswrapper[4776]: I1011 10:28:05.376797 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:05.376876 master-2 kubenswrapper[4776]: I1011 10:28:05.376848 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:05.376970 master-2 kubenswrapper[4776]: I1011 10:28:05.376940 4776 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwznd\" (UniqueName: \"kubernetes.io/projected/59763d5b-237f-4095-bf52-86bb0154381c-kube-api-access-hwznd\") pod \"insights-operator-7dcf5bd85b-6c2rl\" (UID: \"59763d5b-237f-4095-bf52-86bb0154381c\") " pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:05.377011 master-2 kubenswrapper[4776]: E1011 10:28:05.376961 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Oct 11 10:28:05.377011 master-2 kubenswrapper[4776]: I1011 10:28:05.376966 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:28:05.377091 master-2 kubenswrapper[4776]: E1011 10:28:05.377040 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377019899 +0000 UTC m=+121.161446648 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "node-tuning-operator-tls" not found Oct 11 10:28:05.377091 master-2 kubenswrapper[4776]: E1011 10:28:05.377063 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Oct 11 10:28:05.377168 master-2 kubenswrapper[4776]: I1011 10:28:05.377089 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls\") pod \"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:05.377168 master-2 kubenswrapper[4776]: E1011 10:28:05.377110 4776 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Oct 11 10:28:05.377168 master-2 kubenswrapper[4776]: E1011 10:28:05.377124 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert podName:e20ebc39-150b-472a-bb22-328d8f5db87b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377102431 +0000 UTC m=+121.161529190 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert") pod "package-server-manager-798cc87f55-xzntp" (UID: "e20ebc39-150b-472a-bb22-328d8f5db87b") : secret "package-server-manager-serving-cert" not found Oct 11 10:28:05.377168 master-2 kubenswrapper[4776]: E1011 10:28:05.377155 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert podName:eba1e82e-9f3e-4273-836e-9407cc394b10 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377137312 +0000 UTC m=+121.161564021 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-5cf49b6487-8d7xr" (UID: "eba1e82e-9f3e-4273-836e-9407cc394b10") : secret "cloud-credential-operator-serving-cert" not found Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: I1011 10:28:05.377185 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: E1011 10:28:05.377212 4776 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: I1011 10:28:05.377220 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: E1011 10:28:05.377265 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls podName:893af718-1fec-4b8b-8349-d85f978f4140 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377251166 +0000 UTC m=+121.161677905 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls") pod "dns-operator-7769d9677-wh775" (UID: "893af718-1fec-4b8b-8349-d85f978f4140") : secret "metrics-tls" not found Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: E1011 10:28:05.377290 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: E1011 10:28:05.377301 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: E1011 10:28:05.377213 4776 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: E1011 10:28:05.377341 4776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: E1011 10:28:05.377070 4776 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: E1011 10:28:05.377309 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs podName:64310b0b-bae1-4ad3-b106-6d59d47d29b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377303587 +0000 UTC m=+121.161730296 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs") pod "multus-admission-controller-77b66fddc8-5r2t9" (UID: "64310b0b-bae1-4ad3-b106-6d59d47d29b2") : secret "multus-admission-controller-secret" not found Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: I1011 10:28:05.377410 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:28:05.377407 master-2 kubenswrapper[4776]: E1011 10:28:05.377415 4776 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: E1011 10:28:05.377468 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls podName:6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377450441 +0000 UTC m=+121.161877310 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls") pod "ingress-operator-766ddf4575-wf7mj" (UID: "6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c") : secret "metrics-tls" not found Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: E1011 10:28:05.377498 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls podName:b562963f-7112-411a-a64c-3b8eba909c59 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377482352 +0000 UTC m=+121.161909211 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls") pod "cluster-image-registry-operator-6b8674d7ff-mwbsr" (UID: "b562963f-7112-411a-a64c-3b8eba909c59") : secret "image-registry-operator-tls" not found Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: E1011 10:28:05.377532 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls podName:548333d7-2374-4c38-b4fd-45c2bee2ac4e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377513753 +0000 UTC m=+121.161940602 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls") pod "machine-api-operator-9dbb96f7-b88g6" (UID: "548333d7-2374-4c38-b4fd-45c2bee2ac4e") : secret "machine-api-operator-tls" not found Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: E1011 10:28:05.377532 4776 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: E1011 10:28:05.377561 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls podName:e4536c84-d8f3-4808-bf8b-9b40695f46de nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377546404 +0000 UTC m=+121.161973283 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls") pod "machine-config-operator-7b75469658-jtmwh" (UID: "e4536c84-d8f3-4808-bf8b-9b40695f46de") : secret "mco-proxy-tls" not found Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: E1011 10:28:05.377588 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls podName:7e860f23-9dae-4606-9426-0edec38a332f nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377574625 +0000 UTC m=+121.162001374 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-84f9cbd5d9-bjntd" (UID: "7e860f23-9dae-4606-9426-0edec38a332f") : secret "control-plane-machine-set-operator-tls" not found Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: I1011 10:28:05.377633 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: E1011 10:28:05.377706 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert podName:e3281eb7-fb96-4bae-8c55-b79728d426b0 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377660587 +0000 UTC m=+121.162087296 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert") pod "catalog-operator-f966fb6f8-8gkqg" (UID: "e3281eb7-fb96-4bae-8c55-b79728d426b0") : secret "catalog-operator-serving-cert" not found Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: E1011 10:28:05.377746 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: E1011 10:28:05.377785 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs podName:cbf33a7e-abea-411d-9a19-85cfe67debe3 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.37777313 +0000 UTC m=+121.162199869 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs") pod "multus-admission-controller-77b66fddc8-s5r5b" (UID: "cbf33a7e-abea-411d-9a19-85cfe67debe3") : secret "multus-admission-controller-secret" not found Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: I1011 10:28:05.377784 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:05.377844 master-2 kubenswrapper[4776]: E1011 10:28:05.377832 4776 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Oct 11 10:28:05.378303 master-2 kubenswrapper[4776]: I1011 10:28:05.377837 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:05.378303 master-2 kubenswrapper[4776]: E1011 10:28:05.377857 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls podName:dbaa6ca7-9865-42f6-8030-2decf702caa1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377850883 +0000 UTC m=+121.162277592 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-5b5dd85dcc-h8588" (UID: "dbaa6ca7-9865-42f6-8030-2decf702caa1") : secret "cluster-monitoring-operator-tls" not found Oct 11 10:28:05.378303 master-2 kubenswrapper[4776]: E1011 10:28:05.377913 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Oct 11 10:28:05.378303 master-2 kubenswrapper[4776]: E1011 10:28:05.377970 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:06.377950805 +0000 UTC m=+121.162377634 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "performance-addon-operator-webhook-cert" not found Oct 11 10:28:05.403368 master-2 kubenswrapper[4776]: I1011 10:28:05.403302 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" Oct 11 10:28:05.405961 master-2 kubenswrapper[4776]: I1011 10:28:05.405913 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfh8n\" (UniqueName: \"kubernetes.io/projected/548333d7-2374-4c38-b4fd-45c2bee2ac4e-kube-api-access-dfh8n\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:05.416783 master-2 kubenswrapper[4776]: I1011 10:28:05.416731 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" Oct 11 10:28:05.418440 master-2 kubenswrapper[4776]: I1011 10:28:05.418407 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f979h\" (UniqueName: \"kubernetes.io/projected/e3281eb7-fb96-4bae-8c55-b79728d426b0-kube-api-access-f979h\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:05.443752 master-2 kubenswrapper[4776]: I1011 10:28:05.443114 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" Oct 11 10:28:05.449733 master-2 kubenswrapper[4776]: I1011 10:28:05.448401 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bzgn\" (UniqueName: \"kubernetes.io/projected/7004f3ff-6db8-446d-94c1-1223e975299d-kube-api-access-8bzgn\") pod \"authentication-operator-66df44bc95-kxhjc\" (UID: \"7004f3ff-6db8-446d-94c1-1223e975299d\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:05.449733 master-2 kubenswrapper[4776]: I1011 10:28:05.448994 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" Oct 11 10:28:05.453522 master-2 kubenswrapper[4776]: I1011 10:28:05.453485 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" event={"ID":"e487f283-7482-463c-90b6-a812e00d0e35","Type":"ContainerStarted","Data":"682473197d1a6765d5b8eb8302055ac7da19710922e0edf3c0354b34d4fa6a07"} Oct 11 10:28:05.454621 master-2 kubenswrapper[4776]: I1011 10:28:05.454591 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" event={"ID":"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc","Type":"ContainerStarted","Data":"af6eef4adea766bbdc1f8cd4dbe21919a4b8dfead251251d760b6ec0c39cd78c"} Oct 11 10:28:05.481107 master-2 kubenswrapper[4776]: I1011 10:28:05.480833 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58aef476-6586-47bb-bf45-dbeccac6271a-kube-api-access\") pod \"openshift-kube-scheduler-operator-766d6b44f6-s5shc\" (UID: \"58aef476-6586-47bb-bf45-dbeccac6271a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" Oct 11 10:28:05.499159 master-2 kubenswrapper[4776]: I1011 10:28:05.499115 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6whh\" (UniqueName: \"kubernetes.io/projected/7652e0ca-2d18-48c7-80e0-f4a936038377-kube-api-access-t6whh\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:05.519348 master-2 kubenswrapper[4776]: I1011 10:28:05.519257 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtkpx\" (UniqueName: \"kubernetes.io/projected/7e860f23-9dae-4606-9426-0edec38a332f-kube-api-access-xtkpx\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:28:05.541046 master-2 kubenswrapper[4776]: I1011 10:28:05.538940 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwp57\" (UniqueName: \"kubernetes.io/projected/08b7d4e3-1682-4a3b-a757-84ded3a16764-kube-api-access-fwp57\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:05.557466 master-2 kubenswrapper[4776]: I1011 10:28:05.557424 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7"] Oct 11 10:28:05.558525 master-2 kubenswrapper[4776]: I1011 10:28:05.558494 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" Oct 11 10:28:05.565936 master-2 kubenswrapper[4776]: I1011 10:28:05.563194 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krh5m\" (UniqueName: \"kubernetes.io/projected/893af718-1fec-4b8b-8349-d85f978f4140-kube-api-access-krh5m\") pod \"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:05.573492 master-2 kubenswrapper[4776]: I1011 10:28:05.573447 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-vv9w8"] Oct 11 10:28:05.578954 master-2 kubenswrapper[4776]: I1011 10:28:05.578914 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-568c655666-84cp8"] Oct 11 10:28:05.587261 master-2 kubenswrapper[4776]: I1011 10:28:05.587113 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpxlw\" (UniqueName: \"kubernetes.io/projected/8757af56-20fb-439e-adba-7e4e50378936-kube-api-access-xpxlw\") pod \"assisted-installer-controller-v6dfc\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:05.595832 master-2 kubenswrapper[4776]: I1011 10:28:05.595792 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q"] Oct 11 10:28:05.603905 master-2 kubenswrapper[4776]: I1011 10:28:05.601183 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w627\" (UniqueName: \"kubernetes.io/projected/88129ec6-6f99-42a1-842a-6a965c6b58fe-kube-api-access-4w627\") pod \"openshift-controller-manager-operator-5745565d84-bq4rs\" (UID: \"88129ec6-6f99-42a1-842a-6a965c6b58fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" Oct 11 10:28:05.604612 master-2 kubenswrapper[4776]: W1011 10:28:05.604526 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8050d30_444b_40a5_829c_1e3b788910a0.slice/crio-a641dde88cb0eb910af8600e4b9aa4f67ea86f8b0a5f29ef7f742d66d5b7eb69 WatchSource:0}: Error finding container a641dde88cb0eb910af8600e4b9aa4f67ea86f8b0a5f29ef7f742d66d5b7eb69: Status 404 returned error can't find the container with id a641dde88cb0eb910af8600e4b9aa4f67ea86f8b0a5f29ef7f742d66d5b7eb69 Oct 11 10:28:05.611965 master-2 kubenswrapper[4776]: I1011 10:28:05.611910 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv"] Oct 11 10:28:05.614460 master-2 kubenswrapper[4776]: I1011 10:28:05.614405 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" Oct 11 10:28:05.617990 master-2 kubenswrapper[4776]: W1011 10:28:05.617958 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6967590c_695e_4e20_964b_0c643abdf367.slice/crio-18c2d67486e169d09512b65a5f4a23491bdfb755bad6884758580671b299c356 WatchSource:0}: Error finding container 18c2d67486e169d09512b65a5f4a23491bdfb755bad6884758580671b299c356: Status 404 returned error can't find the container with id 18c2d67486e169d09512b65a5f4a23491bdfb755bad6884758580671b299c356 Oct 11 10:28:05.619057 master-2 kubenswrapper[4776]: I1011 10:28:05.618958 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxlns\" (UniqueName: \"kubernetes.io/projected/b16a4f10-c724-43cf-acd4-b3f5aa575653-kube-api-access-mxlns\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:05.632590 master-2 kubenswrapper[4776]: I1011 10:28:05.632063 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54"] Oct 11 10:28:05.635130 master-2 kubenswrapper[4776]: W1011 10:28:05.635086 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda35883b8_6cf5_45d7_a4e3_02c0ac0d91e1.slice/crio-ef035f600c6de6398c2351b00cea47d45cdc23afa3b46d4c5caee020d9ff82b6 WatchSource:0}: Error finding container ef035f600c6de6398c2351b00cea47d45cdc23afa3b46d4c5caee020d9ff82b6: Status 404 returned error can't find the container with id ef035f600c6de6398c2351b00cea47d45cdc23afa3b46d4c5caee020d9ff82b6 Oct 11 10:28:05.637179 master-2 kubenswrapper[4776]: I1011 10:28:05.637139 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5wnj\" (UniqueName: \"kubernetes.io/projected/e4536c84-d8f3-4808-bf8b-9b40695f46de-kube-api-access-x5wnj\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:05.643885 master-2 kubenswrapper[4776]: I1011 10:28:05.641892 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" Oct 11 10:28:05.654105 master-2 kubenswrapper[4776]: I1011 10:28:05.654063 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz"] Oct 11 10:28:05.662342 master-2 kubenswrapper[4776]: I1011 10:28:05.662307 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-bound-sa-token\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:05.679755 master-2 kubenswrapper[4776]: I1011 10:28:05.679729 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lznwk\" (UniqueName: \"kubernetes.io/projected/18ca0678-0b0d-4d5d-bc50-a0a098301f38-kube-api-access-lznwk\") pod \"iptables-alerter-5mn8b\" (UID: \"18ca0678-0b0d-4d5d-bc50-a0a098301f38\") " pod="openshift-network-operator/iptables-alerter-5mn8b" Oct 11 10:28:05.679827 master-2 kubenswrapper[4776]: W1011 10:28:05.679725 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89e02bcb_b3fe_4a45_a531_4ab41d8ee424.slice/crio-f99465d5fca50b4458a90201f8a003a63aa0a3926c975edbb7c2e5699790ba29 WatchSource:0}: Error finding container f99465d5fca50b4458a90201f8a003a63aa0a3926c975edbb7c2e5699790ba29: Status 404 returned error can't find the container with id f99465d5fca50b4458a90201f8a003a63aa0a3926c975edbb7c2e5699790ba29 Oct 11 10:28:05.680703 master-2 kubenswrapper[4776]: I1011 10:28:05.680657 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 11 10:28:05.700317 master-2 kubenswrapper[4776]: I1011 10:28:05.700157 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 11 10:28:05.710355 master-2 kubenswrapper[4776]: I1011 10:28:05.710317 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" Oct 11 10:28:05.720749 master-2 kubenswrapper[4776]: I1011 10:28:05.720714 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 11 10:28:05.722325 master-2 kubenswrapper[4776]: I1011 10:28:05.722288 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc"] Oct 11 10:28:05.772777 master-2 kubenswrapper[4776]: I1011 10:28:05.772552 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:28:05.782206 master-2 kubenswrapper[4776]: I1011 10:28:05.782033 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc"] Oct 11 10:28:05.790598 master-2 kubenswrapper[4776]: W1011 10:28:05.790539 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8757af56_20fb_439e_adba_7e4e50378936.slice/crio-a75184c2ae2e40b8111ddce83c4348030deaf04dbe5bc245f6d525047856b81f WatchSource:0}: Error finding container a75184c2ae2e40b8111ddce83c4348030deaf04dbe5bc245f6d525047856b81f: Status 404 returned error can't find the container with id a75184c2ae2e40b8111ddce83c4348030deaf04dbe5bc245f6d525047856b81f Oct 11 10:28:05.798259 master-2 kubenswrapper[4776]: W1011 10:28:05.798000 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58aef476_6586_47bb_bf45_dbeccac6271a.slice/crio-ac2feb73ac3d44ff07af99870af8e846b44254a04b4b3bda7aa0d54ef49c052c WatchSource:0}: Error finding container ac2feb73ac3d44ff07af99870af8e846b44254a04b4b3bda7aa0d54ef49c052c: Status 404 returned error can't find the container with id ac2feb73ac3d44ff07af99870af8e846b44254a04b4b3bda7aa0d54ef49c052c Oct 11 10:28:05.814746 master-2 kubenswrapper[4776]: I1011 10:28:05.814707 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2"] Oct 11 10:28:05.818235 master-2 kubenswrapper[4776]: I1011 10:28:05.818179 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5mn8b" Oct 11 10:28:05.822469 master-2 kubenswrapper[4776]: E1011 10:28:05.822314 4776 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cluster-storage-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d8df789ec16971dc14423860f7b20b9ee27d926e4e5be632714cadc15e7f9b32,Command:[cluster-storage-operator start],Args:[start -v=2 --terminate-on-files=/var/run/secrets/serving-cert/tls.crt 
--terminate-on-files=/var/run/secrets/serving-cert/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.25,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.25,ValueFrom:nil,},EnvVar{Name:AWS_EBS_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf90c10ec9d9171d5bd25b66abd13d8b5b9d2b6d760915c2340267349dd52b30,ValueFrom:nil,},EnvVar{Name:AWS_EBS_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de96b1f387e4519341ed1c1716ce281855ff8cdb3c16ef5b2679cdc9f7750ced,ValueFrom:nil,},EnvVar{Name:GCP_PD_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d43bfe6638266a6703a47d5be6c2452bd2d8cc3acf29cbf3888849124b4869,ValueFrom:nil,},EnvVar{Name:GCP_PD_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2a28202489fb0a4ba57fdec457474da3dd3cecf14755740b3cf67928b4ee939a,ValueFrom:nil,},EnvVar{Name:OPENSTACK_CINDER_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b7152d9afe01f6a4478bc99b44325fe5a9490009fd26e805c12a32c5494a6c56,ValueFrom:nil,},EnvVar{Name:OPENSTACK_CINDER_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c7b6f05c6c4268a757c602999ab17f19d4c736be8fb245e16edcc2299408a599,ValueFrom:nil,},EnvVar{Name:OVIRT_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9d796f981ddcc15d0e399f72991eef54933ac737c38323f66a4f4b5f2719c836,ValueFrom:nil,},EnvVar{Name:OVIRT_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56269a150f4fa9a1befa2322cbff6987fab9d057c47ce9e22e349c57ed9ada5,ValueFrom:nil,},EnvVar{Name:MANILA_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:163e4a4b11c0e88deac21852e7faecb57330d887d4022a4a205b3b426b9d8ab8,ValueFrom:nil,},EnvVar{Name:MANILA_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6cee6f9e952daa541bb07a0d913da6c0b910526d679bc6e57f22b253651e933,ValueFrom:nil,},EnvVar{Name:MANILA_NFS_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:875ca6816a8aa81b837afffa42cc10710abe9c94edd7e90cfef0723aa9a9c3a9,ValueFrom:nil,},EnvVar{Name:PROVISIONER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:259bcf080eb040af978ae5cb6b9ecdb23cb30070a46dc2e9eaad8f39dd0ea3b4,ValueFrom:nil,},EnvVar{Name:ATTACHER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9d9a92fcbd55d858a29d71a1c3de84e9e54fcda14b133d77927dfa5e481cd26c,ValueFrom:nil,},EnvVar{Name:RESIZER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9631ca611a761ba23809c6f0b4286d3507df81057ebf7c3a316a780dd3a238f5,ValueFrom:nil,},EnvVar{Name:SNAPSHOTTER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32ba38ea67c3cc901f2168fd8741d04c38d41eebe99a13aab0e308f7a3d19e2d,ValueFrom:nil,},EnvVar{Name:NODE_DRIVER_REGISTRAR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:02a63915677f55be53239c2d39112a24c8fb76569418b4cf143784d7d4978a98,ValueFrom:nil,},EnvVar{Name:LIVENESS_PROBE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:14f4c10817737e35e427b12960465d4925250a3216e20cd9c8703e384d82072a,ValueFrom:nil,},EnvVar{Name:VSPHERE_PROBLEM_DETECTOR_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a18b9f542d7e3c308e85f1ddd9ab16f597fa8bd8179fae50ebce6e76c368bae,ValueFrom:nil,},EnvVar{Name:AZURE_DISK_DRIVER_OPERATOR_I
MAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3b6ee9244784011ccc9b003e3fef287e1e8fe841ee84cefff3064e627a8bc102,ValueFrom:nil,},EnvVar{Name:AZURE_DISK_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:580436c37851ccb8511ad0bd03f4e975e899cbafa4ab82e8285a0e7a968a94be,ValueFrom:nil,},EnvVar{Name:AZURE_FILE_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3cf7fe3e89df66f3f14669abc5000a0c57f8a8108fbf2fdfdabd5a2385fa1183,ValueFrom:nil,},EnvVar{Name:AZURE_FILE_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:38eba09d0099585d4d4318e444a2ad827087b415232b3ae5351741422bcea2fc,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169,ValueFrom:nil,},EnvVar{Name:VMWARE_VSPHERE_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c349bac6253b33ef299566d78e22aea174c8431cae2591d56ba22d389c01bc5,ValueFrom:nil,},EnvVar{Name:VMWARE_VSPHERE_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:601a001f8e7ac290cab0eede5fff7fbd23100bc92c2e8068c7e1dfa85cbc8c00,ValueFrom:nil,},EnvVar{Name:VMWARE_VSPHERE_SYNCER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3dd0132eff8900273dbfe86c5be49afd8101cefcde67bdc4ad586b02a8caf342,ValueFrom:nil,},EnvVar{Name:CLUSTER_CLOUD_CONTROLLER_MANAGER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:201e8fc1896dadc01ce68cec4c7437f12ddc3ac35792cc4d193242b5c41f48e1,ValueFrom:nil,},EnvVar{Name:IBM_VPC_BLOCK_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f661f985884d888372169daa7122c637b35de7f146de29a1ecc3e45007d2a0d5,ValueFrom:nil,},EnvVar{Name:IBM_VPC_BLOCK_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72a42f061ebbf86c63dc5750d1a4d9292299fb986837cb148214de1afbc3e5d4,ValueFrom:nil,},EnvVar{Name:POWERVS_BLOCK_CSI_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:46be8d78844b327070148dc5381af89dda5c2e3994b93e7b7e82bdec70e8916d,ValueFrom:nil,},EnvVar{Name:POWERVS_BLOCK_CSI_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53cfbbddd9b4b73dc46c7c16b4b01211e2d04f2ddad16607baf5c7e08e3c9190,ValueFrom:nil,},EnvVar{Name:TOOLS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2f612a166859beeffd90c78a8dfe0dc0721ffe5e0bc9b7a6d1ee155e0a39830,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cluster-storage-operator-serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vc8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cluster-storage-operator-56d4b95494-9fbb2_openshift-cluster-storage-operator(e540333c-4b4d-439e-a82a-cd3a97c95a43): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 11 10:28:05.823853 master-2 kubenswrapper[4776]: E1011 10:28:05.823821 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-storage-operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" podUID="e540333c-4b4d-439e-a82a-cd3a97c95a43" Oct 11 10:28:05.830001 master-2 kubenswrapper[4776]: W1011 10:28:05.829955 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18ca0678_0b0d_4d5d_bc50_a0a098301f38.slice/crio-aff4fc2bae63d39167bfd5b06973cb1e2c2eed3757eaedf7fcff7ce5f143d716 WatchSource:0}: Error finding container aff4fc2bae63d39167bfd5b06973cb1e2c2eed3757eaedf7fcff7ce5f143d716: Status 404 returned error can't find the container with id aff4fc2bae63d39167bfd5b06973cb1e2c2eed3757eaedf7fcff7ce5f143d716 Oct 11 10:28:05.832519 master-2 kubenswrapper[4776]: E1011 10:28:05.832473 4776 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lznwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5mn8b_openshift-network-operator(18ca0678-0b0d-4d5d-bc50-a0a098301f38): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 11 10:28:05.833849 master-2 kubenswrapper[4776]: E1011 10:28:05.833807 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-network-operator/iptables-alerter-5mn8b" podUID="18ca0678-0b0d-4d5d-bc50-a0a098301f38" Oct 11 10:28:05.844004 master-2 kubenswrapper[4776]: I1011 10:28:05.843966 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-7dcf5bd85b-6c2rl"] Oct 11 10:28:05.850526 master-2 kubenswrapper[4776]: W1011 10:28:05.850473 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59763d5b_237f_4095_bf52_86bb0154381c.slice/crio-81917bbf0fa4d12b89f9108c22d49466b9b11f765e577bb4761536b9fd8d7328 WatchSource:0}: Error finding container 81917bbf0fa4d12b89f9108c22d49466b9b11f765e577bb4761536b9fd8d7328: Status 404 returned error can't find the container with id 81917bbf0fa4d12b89f9108c22d49466b9b11f765e577bb4761536b9fd8d7328 Oct 11 10:28:05.854113 master-2 kubenswrapper[4776]: E1011 10:28:05.854049 4776 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:insights-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c3058c461907ec5ff06a628e935722d7ec8bf86fa90b95269372a6dc41444ce,Command:[],Args:[start --config=/etc/insights-operator/server.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:RELEASE_VERSION,Value:4.18.25,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{56623104 0} {} 54Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:snapshots,ReadOnly:false,MountPath:/var/lib/insights-operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:trusted-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/trusted-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:service-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/service-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwznd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000270000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod insights-operator-7dcf5bd85b-6c2rl_openshift-insights(59763d5b-237f-4095-bf52-86bb0154381c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 11 10:28:05.855266 master-2 kubenswrapper[4776]: E1011 10:28:05.855228 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" podUID="59763d5b-237f-4095-bf52-86bb0154381c" Oct 11 10:28:05.887079 master-2 kubenswrapper[4776]: I1011 10:28:05.887002 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs"] Oct 11 10:28:05.893692 master-2 kubenswrapper[4776]: W1011 10:28:05.893642 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88129ec6_6f99_42a1_842a_6a965c6b58fe.slice/crio-effc2411f0b3e10834a868d9bfd6f868e3e9f1606e9d6043cd6d8654e3630f38 WatchSource:0}: Error finding container effc2411f0b3e10834a868d9bfd6f868e3e9f1606e9d6043cd6d8654e3630f38: Status 404 returned error can't find the container with id effc2411f0b3e10834a868d9bfd6f868e3e9f1606e9d6043cd6d8654e3630f38 Oct 11 10:28:05.895933 master-2 kubenswrapper[4776]: E1011 10:28:05.895862 4776 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openshift-controller-manager-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f425875bda87dc167d613efc88c56256e48364b73174d1392f7d23301baec0b,Command:[cluster-openshift-controller-manager-operator 
operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.25,ValueFrom:nil,},EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5950bf8a793f25392f3fdfa898a2bfe0998be83e86a5f93c07a9d22a0816b9c6,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.25,ValueFrom:nil,},EnvVar{Name:ROUTE_CONTROLLER_MANAGER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830ed0d3e5e363859e3b08,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4w627,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-controller-manager-operator-5745565d84-bq4rs_openshift-controller-manager-operator(88129ec6-6f99-42a1-842a-6a965c6b58fe): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 11 10:28:05.897293 master-2 kubenswrapper[4776]: E1011 10:28:05.897247 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" podUID="88129ec6-6f99-42a1-842a-6a965c6b58fe" Oct 11 10:28:06.286483 master-2 kubenswrapper[4776]: I1011 10:28:06.286420 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:06.286483 master-2 kubenswrapper[4776]: I1011 10:28:06.286466 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: 
\"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:06.286798 master-2 kubenswrapper[4776]: I1011 10:28:06.286502 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:06.286798 master-2 kubenswrapper[4776]: E1011 10:28:06.286703 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Oct 11 10:28:06.286798 master-2 kubenswrapper[4776]: E1011 10:28:06.286747 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert podName:6b8dc5b8-3c48-4dba-9992-6e269ca133f1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.286733415 +0000 UTC m=+123.071160124 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert") pod "cluster-autoscaler-operator-7ff449c7c5-cfvjb" (UID: "6b8dc5b8-3c48-4dba-9992-6e269ca133f1") : secret "cluster-autoscaler-operator-cert" not found Oct 11 10:28:06.287091 master-2 kubenswrapper[4776]: E1011 10:28:06.287070 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:06.287144 master-2 kubenswrapper[4776]: E1011 10:28:06.287098 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.287091295 +0000 UTC m=+123.071518004 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:06.287144 master-2 kubenswrapper[4776]: E1011 10:28:06.287133 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Oct 11 10:28:06.287239 master-2 kubenswrapper[4776]: E1011 10:28:06.287153 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.287144587 +0000 UTC m=+123.071571296 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-webhook-server-cert" not found Oct 11 10:28:06.388087 master-2 kubenswrapper[4776]: I1011 10:28:06.388042 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:28:06.388268 master-2 kubenswrapper[4776]: I1011 10:28:06.388095 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls\") pod \"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:06.388268 master-2 kubenswrapper[4776]: I1011 10:28:06.388153 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:06.388268 master-2 kubenswrapper[4776]: I1011 10:28:06.388175 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:28:06.388268 master-2 kubenswrapper[4776]: I1011 10:28:06.388233 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:28:06.388268 master-2 kubenswrapper[4776]: I1011 10:28:06.388259 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:28:06.388435 master-2 kubenswrapper[4776]: I1011 10:28:06.388286 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:06.388435 master-2 kubenswrapper[4776]: I1011 10:28:06.388323 4776 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:06.388435 master-2 kubenswrapper[4776]: I1011 10:28:06.388342 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:06.388435 master-2 kubenswrapper[4776]: I1011 10:28:06.388420 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:06.388539 master-2 kubenswrapper[4776]: I1011 10:28:06.388439 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:06.388539 master-2 kubenswrapper[4776]: I1011 10:28:06.388459 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:06.388539 master-2 kubenswrapper[4776]: I1011 10:28:06.388475 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:06.388539 master-2 kubenswrapper[4776]: I1011 10:28:06.388493 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:06.388539 master-2 kubenswrapper[4776]: I1011 10:28:06.388513 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:06.388539 master-2 kubenswrapper[4776]: I1011 10:28:06.388529 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:06.388701 master-2 kubenswrapper[4776]: I1011 10:28:06.388546 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:06.388820 master-2 kubenswrapper[4776]: E1011 10:28:06.388772 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Oct 11 10:28:06.388860 master-2 kubenswrapper[4776]: E1011 10:28:06.388834 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.388817314 +0000 UTC m=+123.173244023 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "performance-addon-operator-webhook-cert" not found Oct 11 10:28:06.388953 master-2 kubenswrapper[4776]: E1011 10:28:06.388882 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 11 10:28:06.388953 master-2 kubenswrapper[4776]: E1011 10:28:06.388906 4776 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Oct 11 10:28:06.389018 master-2 kubenswrapper[4776]: E1011 10:28:06.388953 4776 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Oct 11 10:28:06.389018 master-2 kubenswrapper[4776]: E1011 10:28:06.388976 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs podName:64310b0b-bae1-4ad3-b106-6d59d47d29b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.388948288 +0000 UTC m=+123.173374997 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs") pod "multus-admission-controller-77b66fddc8-5r2t9" (UID: "64310b0b-bae1-4ad3-b106-6d59d47d29b2") : secret "multus-admission-controller-secret" not found Oct 11 10:28:06.389018 master-2 kubenswrapper[4776]: E1011 10:28:06.389004 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls podName:b562963f-7112-411a-a64c-3b8eba909c59 nodeName:}" failed. 
No retries permitted until 2025-10-11 10:28:08.388993309 +0000 UTC m=+123.173420088 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls") pod "cluster-image-registry-operator-6b8674d7ff-mwbsr" (UID: "b562963f-7112-411a-a64c-3b8eba909c59") : secret "image-registry-operator-tls" not found Oct 11 10:28:06.389110 master-2 kubenswrapper[4776]: E1011 10:28:06.389021 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert podName:eba1e82e-9f3e-4273-836e-9407cc394b10 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.38901273 +0000 UTC m=+123.173439519 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-5cf49b6487-8d7xr" (UID: "eba1e82e-9f3e-4273-836e-9407cc394b10") : secret "cloud-credential-operator-serving-cert" not found Oct 11 10:28:06.389110 master-2 kubenswrapper[4776]: E1011 10:28:06.389027 4776 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Oct 11 10:28:06.389110 master-2 kubenswrapper[4776]: E1011 10:28:06.389073 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls podName:e4536c84-d8f3-4808-bf8b-9b40695f46de nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389052891 +0000 UTC m=+123.173479650 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls") pod "machine-config-operator-7b75469658-jtmwh" (UID: "e4536c84-d8f3-4808-bf8b-9b40695f46de") : secret "mco-proxy-tls" not found Oct 11 10:28:06.389110 master-2 kubenswrapper[4776]: E1011 10:28:06.389077 4776 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Oct 11 10:28:06.389234 master-2 kubenswrapper[4776]: E1011 10:28:06.389114 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics podName:7652e0ca-2d18-48c7-80e0-f4a936038377 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389102102 +0000 UTC m=+123.173528891 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics") pod "marketplace-operator-c4f798dd4-wsmdd" (UID: "7652e0ca-2d18-48c7-80e0-f4a936038377") : secret "marketplace-operator-metrics" not found Oct 11 10:28:06.389234 master-2 kubenswrapper[4776]: E1011 10:28:06.389075 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Oct 11 10:28:06.389234 master-2 kubenswrapper[4776]: E1011 10:28:06.389130 4776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Oct 11 10:28:06.389234 master-2 kubenswrapper[4776]: E1011 10:28:06.389153 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert podName:e20ebc39-150b-472a-bb22-328d8f5db87b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389144623 +0000 UTC m=+123.173571422 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert") pod "package-server-manager-798cc87f55-xzntp" (UID: "e20ebc39-150b-472a-bb22-328d8f5db87b") : secret "package-server-manager-serving-cert" not found Oct 11 10:28:06.389234 master-2 kubenswrapper[4776]: E1011 10:28:06.389169 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls podName:548333d7-2374-4c38-b4fd-45c2bee2ac4e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389160384 +0000 UTC m=+123.173587163 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls") pod "machine-api-operator-9dbb96f7-b88g6" (UID: "548333d7-2374-4c38-b4fd-45c2bee2ac4e") : secret "machine-api-operator-tls" not found Oct 11 10:28:06.389234 master-2 kubenswrapper[4776]: E1011 10:28:06.389173 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 11 10:28:06.389234 master-2 kubenswrapper[4776]: E1011 10:28:06.389205 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs podName:cbf33a7e-abea-411d-9a19-85cfe67debe3 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389197015 +0000 UTC m=+123.173623834 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs") pod "multus-admission-controller-77b66fddc8-s5r5b" (UID: "cbf33a7e-abea-411d-9a19-85cfe67debe3") : secret "multus-admission-controller-secret" not found Oct 11 10:28:06.389234 master-2 kubenswrapper[4776]: E1011 10:28:06.389209 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Oct 11 10:28:06.389234 master-2 kubenswrapper[4776]: E1011 10:28:06.389122 4776 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Oct 11 10:28:06.389234 master-2 kubenswrapper[4776]: E1011 10:28:06.389238 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert podName:d4354488-1b32-422d-bb06-767a952192a5 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389228126 +0000 UTC m=+123.173654915 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert") pod "olm-operator-867f8475d9-8lf59" (UID: "d4354488-1b32-422d-bb06-767a952192a5") : secret "olm-operator-serving-cert" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389250 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389253 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls podName:893af718-1fec-4b8b-8349-d85f978f4140 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389246626 +0000 UTC m=+123.173673445 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls") pod "dns-operator-7769d9677-wh775" (UID: "893af718-1fec-4b8b-8349-d85f978f4140") : secret "metrics-tls" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389260 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389211 4776 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389215 4776 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389033 4776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389276 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389269417 +0000 UTC m=+123.173696126 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "node-tuning-operator-tls" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389414 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert podName:e3281eb7-fb96-4bae-8c55-b79728d426b0 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389404532 +0000 UTC m=+123.173831241 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert") pod "catalog-operator-f966fb6f8-8gkqg" (UID: "e3281eb7-fb96-4bae-8c55-b79728d426b0") : secret "catalog-operator-serving-cert" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389424 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls podName:7e860f23-9dae-4606-9426-0edec38a332f nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389418952 +0000 UTC m=+123.173845661 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-84f9cbd5d9-bjntd" (UID: "7e860f23-9dae-4606-9426-0edec38a332f") : secret "control-plane-machine-set-operator-tls" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389435 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls podName:dbaa6ca7-9865-42f6-8030-2decf702caa1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389430162 +0000 UTC m=+123.173856871 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-5b5dd85dcc-h8588" (UID: "dbaa6ca7-9865-42f6-8030-2decf702caa1") : secret "cluster-monitoring-operator-tls" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389446 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls podName:08b7d4e3-1682-4a3b-a757-84ded3a16764 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389440643 +0000 UTC m=+123.173867352 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls") pod "machine-approver-7876f99457-h7hhv" (UID: "08b7d4e3-1682-4a3b-a757-84ded3a16764") : secret "machine-approver-tls" not found Oct 11 10:28:06.389506 master-2 kubenswrapper[4776]: E1011 10:28:06.389497 4776 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Oct 11 10:28:06.390006 master-2 kubenswrapper[4776]: E1011 10:28:06.389544 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls podName:6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c nodeName:}" failed. No retries permitted until 2025-10-11 10:28:08.389537335 +0000 UTC m=+123.173964044 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls") pod "ingress-operator-766ddf4575-wf7mj" (UID: "6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c") : secret "metrics-tls" not found Oct 11 10:28:06.459388 master-2 kubenswrapper[4776]: I1011 10:28:06.459257 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" event={"ID":"7004f3ff-6db8-446d-94c1-1223e975299d","Type":"ContainerStarted","Data":"5cfec723866b812f77a49c915767786d108ed192c33a71d16a214dbbfd2a0d46"} Oct 11 10:28:06.460638 master-2 kubenswrapper[4776]: I1011 10:28:06.460607 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" event={"ID":"58aef476-6586-47bb-bf45-dbeccac6271a","Type":"ContainerStarted","Data":"ac2feb73ac3d44ff07af99870af8e846b44254a04b4b3bda7aa0d54ef49c052c"} Oct 11 10:28:06.461565 master-2 kubenswrapper[4776]: I1011 10:28:06.461543 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" event={"ID":"05cf2994-c049-4f42-b2d8-83b23e7e763a","Type":"ContainerStarted","Data":"f26c0ef0c38264941d187bbda410fe98086c119977b5c40be0952dd4d38735f9"} Oct 11 10:28:06.463527 master-2 kubenswrapper[4776]: I1011 10:28:06.463480 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" event={"ID":"88129ec6-6f99-42a1-842a-6a965c6b58fe","Type":"ContainerStarted","Data":"effc2411f0b3e10834a868d9bfd6f868e3e9f1606e9d6043cd6d8654e3630f38"} Oct 11 10:28:06.465413 master-2 kubenswrapper[4776]: E1011 10:28:06.464893 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f425875bda87dc167d613efc88c56256e48364b73174d1392f7d23301baec0b\\\"\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" podUID="88129ec6-6f99-42a1-842a-6a965c6b58fe" Oct 11 10:28:06.465413 master-2 kubenswrapper[4776]: I1011 10:28:06.465192 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" event={"ID":"6967590c-695e-4e20-964b-0c643abdf367","Type":"ContainerStarted","Data":"18c2d67486e169d09512b65a5f4a23491bdfb755bad6884758580671b299c356"} Oct 11 10:28:06.466199 master-2 kubenswrapper[4776]: I1011 
10:28:06.466167 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" event={"ID":"9d362fb9-48e4-4d72-a940-ec6c9c051fac","Type":"ContainerStarted","Data":"8927aa5a212997b84e6c2aa15861cb3f5032bda0e77b5b5d1174cff70042e0fe"} Oct 11 10:28:06.467096 master-2 kubenswrapper[4776]: I1011 10:28:06.467077 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5mn8b" event={"ID":"18ca0678-0b0d-4d5d-bc50-a0a098301f38","Type":"ContainerStarted","Data":"aff4fc2bae63d39167bfd5b06973cb1e2c2eed3757eaedf7fcff7ce5f143d716"} Oct 11 10:28:06.468043 master-2 kubenswrapper[4776]: E1011 10:28:06.468016 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a\\\"\"" pod="openshift-network-operator/iptables-alerter-5mn8b" podUID="18ca0678-0b0d-4d5d-bc50-a0a098301f38" Oct 11 10:28:06.468295 master-2 kubenswrapper[4776]: I1011 10:28:06.468252 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" event={"ID":"89e02bcb-b3fe-4a45-a531-4ab41d8ee424","Type":"ContainerStarted","Data":"f99465d5fca50b4458a90201f8a003a63aa0a3926c975edbb7c2e5699790ba29"} Oct 11 10:28:06.469045 master-2 kubenswrapper[4776]: I1011 10:28:06.469015 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" event={"ID":"e540333c-4b4d-439e-a82a-cd3a97c95a43","Type":"ContainerStarted","Data":"51185f5a5ef8be51d0b9fb54a45d8490768ada5cc0c176fc8916c38ad3293b36"} Oct 11 10:28:06.470261 master-2 kubenswrapper[4776]: I1011 10:28:06.470207 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-v6dfc" event={"ID":"8757af56-20fb-439e-adba-7e4e50378936","Type":"ContainerStarted","Data":"a75184c2ae2e40b8111ddce83c4348030deaf04dbe5bc245f6d525047856b81f"} Oct 11 10:28:06.470390 master-2 kubenswrapper[4776]: E1011 10:28:06.470361 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-storage-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d8df789ec16971dc14423860f7b20b9ee27d926e4e5be632714cadc15e7f9b32\\\"\"" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" podUID="e540333c-4b4d-439e-a82a-cd3a97c95a43" Oct 11 10:28:06.471820 master-2 kubenswrapper[4776]: I1011 10:28:06.471796 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" event={"ID":"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1","Type":"ContainerStarted","Data":"ef035f600c6de6398c2351b00cea47d45cdc23afa3b46d4c5caee020d9ff82b6"} Oct 11 10:28:06.473583 master-2 kubenswrapper[4776]: I1011 10:28:06.473561 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-vv9w8" event={"ID":"a0b806b9-13ff-45fa-afba-5d0c89eac7df","Type":"ContainerStarted","Data":"efd656a1d8792a9b72e0b29d7f3bda39220cfc02fe075faa984b3373ff02bcd7"} Oct 11 10:28:06.474777 master-2 kubenswrapper[4776]: I1011 10:28:06.474756 4776 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" event={"ID":"f8050d30-444b-40a5-829c-1e3b788910a0","Type":"ContainerStarted","Data":"a641dde88cb0eb910af8600e4b9aa4f67ea86f8b0a5f29ef7f742d66d5b7eb69"} Oct 11 10:28:06.475627 master-2 kubenswrapper[4776]: I1011 10:28:06.475606 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" event={"ID":"59763d5b-237f-4095-bf52-86bb0154381c","Type":"ContainerStarted","Data":"81917bbf0fa4d12b89f9108c22d49466b9b11f765e577bb4761536b9fd8d7328"} Oct 11 10:28:06.476651 master-2 kubenswrapper[4776]: E1011 10:28:06.476626 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c3058c461907ec5ff06a628e935722d7ec8bf86fa90b95269372a6dc41444ce\\\"\"" pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" podUID="59763d5b-237f-4095-bf52-86bb0154381c" Oct 11 10:28:07.479141 master-2 kubenswrapper[4776]: E1011 10:28:07.478887 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f425875bda87dc167d613efc88c56256e48364b73174d1392f7d23301baec0b\\\"\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" podUID="88129ec6-6f99-42a1-842a-6a965c6b58fe" Oct 11 10:28:07.479141 master-2 kubenswrapper[4776]: E1011 10:28:07.478893 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c3058c461907ec5ff06a628e935722d7ec8bf86fa90b95269372a6dc41444ce\\\"\"" pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" podUID="59763d5b-237f-4095-bf52-86bb0154381c" Oct 11 10:28:07.479141 master-2 kubenswrapper[4776]: E1011 10:28:07.478929 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-storage-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d8df789ec16971dc14423860f7b20b9ee27d926e4e5be632714cadc15e7f9b32\\\"\"" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" podUID="e540333c-4b4d-439e-a82a-cd3a97c95a43" Oct 11 10:28:08.312323 master-2 kubenswrapper[4776]: I1011 10:28:08.312214 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:08.312740 master-2 kubenswrapper[4776]: E1011 10:28:08.312419 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Oct 11 10:28:08.312740 master-2 kubenswrapper[4776]: I1011 10:28:08.312494 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls\") pod 
\"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:08.312740 master-2 kubenswrapper[4776]: E1011 10:28:08.312515 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert podName:6b8dc5b8-3c48-4dba-9992-6e269ca133f1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.312492428 +0000 UTC m=+127.096919137 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert") pod "cluster-autoscaler-operator-7ff449c7c5-cfvjb" (UID: "6b8dc5b8-3c48-4dba-9992-6e269ca133f1") : secret "cluster-autoscaler-operator-cert" not found Oct 11 10:28:08.312740 master-2 kubenswrapper[4776]: I1011 10:28:08.312535 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:08.312740 master-2 kubenswrapper[4776]: E1011 10:28:08.312582 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:08.312740 master-2 kubenswrapper[4776]: E1011 10:28:08.312609 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Oct 11 10:28:08.312740 master-2 kubenswrapper[4776]: E1011 10:28:08.312616 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.312605921 +0000 UTC m=+127.097032640 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:08.312740 master-2 kubenswrapper[4776]: E1011 10:28:08.312633 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.312627171 +0000 UTC m=+127.097053880 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-webhook-server-cert" not found Oct 11 10:28:08.413662 master-2 kubenswrapper[4776]: I1011 10:28:08.413531 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:28:08.413662 master-2 kubenswrapper[4776]: I1011 10:28:08.413617 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls\") pod \"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:08.413662 master-2 kubenswrapper[4776]: I1011 10:28:08.413650 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:08.413662 master-2 kubenswrapper[4776]: I1011 10:28:08.413696 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: I1011 10:28:08.413738 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: I1011 10:28:08.413771 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: E1011 10:28:08.413738 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: I1011 10:28:08.413818 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls\") pod 
\"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: I1011 10:28:08.413850 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: E1011 10:28:08.413863 4776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: E1011 10:28:08.413902 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: E1011 10:28:08.413911 4776 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: E1011 10:28:08.413872 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert podName:e20ebc39-150b-472a-bb22-328d8f5db87b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.413851496 +0000 UTC m=+127.198278205 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert") pod "package-server-manager-798cc87f55-xzntp" (UID: "e20ebc39-150b-472a-bb22-328d8f5db87b") : secret "package-server-manager-serving-cert" not found Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: E1011 10:28:08.413776 4776 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: E1011 10:28:08.413979 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: E1011 10:28:08.413991 4776 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: E1011 10:28:08.414062 4776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Oct 11 10:28:08.414057 master-2 kubenswrapper[4776]: I1011 10:28:08.414009 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.413782 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret 
"multus-admission-controller-secret" not found Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414037 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414020091 +0000 UTC m=+127.198446860 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "performance-addon-operator-webhook-cert" not found Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414137 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls podName:548333d7-2374-4c38-b4fd-45c2bee2ac4e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414126804 +0000 UTC m=+127.198553513 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls") pod "machine-api-operator-9dbb96f7-b88g6" (UID: "548333d7-2374-4c38-b4fd-45c2bee2ac4e") : secret "machine-api-operator-tls" not found Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414149 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs podName:cbf33a7e-abea-411d-9a19-85cfe67debe3 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414142354 +0000 UTC m=+127.198569063 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs") pod "multus-admission-controller-77b66fddc8-s5r5b" (UID: "cbf33a7e-abea-411d-9a19-85cfe67debe3") : secret "multus-admission-controller-secret" not found Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414161 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls podName:dbaa6ca7-9865-42f6-8030-2decf702caa1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414156015 +0000 UTC m=+127.198582724 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-5b5dd85dcc-h8588" (UID: "dbaa6ca7-9865-42f6-8030-2decf702caa1") : secret "cluster-monitoring-operator-tls" not found Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414173 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls podName:893af718-1fec-4b8b-8349-d85f978f4140 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414168835 +0000 UTC m=+127.198595544 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls") pod "dns-operator-7769d9677-wh775" (UID: "893af718-1fec-4b8b-8349-d85f978f4140") : secret "metrics-tls" not found Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414183 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls podName:08b7d4e3-1682-4a3b-a757-84ded3a16764 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414178665 +0000 UTC m=+127.198605374 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls") pod "machine-approver-7876f99457-h7hhv" (UID: "08b7d4e3-1682-4a3b-a757-84ded3a16764") : secret "machine-approver-tls" not found Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414197 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls podName:7e860f23-9dae-4606-9426-0edec38a332f nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414190326 +0000 UTC m=+127.198617025 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-84f9cbd5d9-bjntd" (UID: "7e860f23-9dae-4606-9426-0edec38a332f") : secret "control-plane-machine-set-operator-tls" not found Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: I1011 10:28:08.414226 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414253 4776 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414287 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics podName:7652e0ca-2d18-48c7-80e0-f4a936038377 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414277118 +0000 UTC m=+127.198703887 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics") pod "marketplace-operator-c4f798dd4-wsmdd" (UID: "7652e0ca-2d18-48c7-80e0-f4a936038377") : secret "marketplace-operator-metrics" not found Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414293 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: I1011 10:28:08.414257 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414331 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs podName:64310b0b-bae1-4ad3-b106-6d59d47d29b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414296139 +0000 UTC m=+127.198722838 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs") pod "multus-admission-controller-77b66fddc8-5r2t9" (UID: "64310b0b-bae1-4ad3-b106-6d59d47d29b2") : secret "multus-admission-controller-secret" not found Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: I1011 10:28:08.414362 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: I1011 10:28:08.414394 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: I1011 10:28:08.414428 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414431 4776 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414469 4776 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Oct 11 10:28:08.414625 master-2 
kubenswrapper[4776]: E1011 10:28:08.414472 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert podName:d4354488-1b32-422d-bb06-767a952192a5 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414440913 +0000 UTC m=+127.198867632 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert") pod "olm-operator-867f8475d9-8lf59" (UID: "d4354488-1b32-422d-bb06-767a952192a5") : secret "olm-operator-serving-cert" not found Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414495 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414502 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls podName:e4536c84-d8f3-4808-bf8b-9b40695f46de nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414495464 +0000 UTC m=+127.198922173 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls") pod "machine-config-operator-7b75469658-jtmwh" (UID: "e4536c84-d8f3-4808-bf8b-9b40695f46de") : secret "mco-proxy-tls" not found Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: I1011 10:28:08.414508 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414516 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414510035 +0000 UTC m=+127.198936744 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "node-tuning-operator-tls" not found Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414559 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert podName:eba1e82e-9f3e-4273-836e-9407cc394b10 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414549516 +0000 UTC m=+127.198976325 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-5cf49b6487-8d7xr" (UID: "eba1e82e-9f3e-4273-836e-9407cc394b10") : secret "cloud-credential-operator-serving-cert" not found Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414561 4776 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: I1011 10:28:08.414580 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414598 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls podName:b562963f-7112-411a-a64c-3b8eba909c59 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414588477 +0000 UTC m=+127.199015246 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls") pod "cluster-image-registry-operator-6b8674d7ff-mwbsr" (UID: "b562963f-7112-411a-a64c-3b8eba909c59") : secret "image-registry-operator-tls" not found Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: I1011 10:28:08.414613 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414625 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Oct 11 10:28:08.414625 master-2 kubenswrapper[4776]: E1011 10:28:08.414654 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert podName:e3281eb7-fb96-4bae-8c55-b79728d426b0 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:12.414646019 +0000 UTC m=+127.199072788 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert") pod "catalog-operator-f966fb6f8-8gkqg" (UID: "e3281eb7-fb96-4bae-8c55-b79728d426b0") : secret "catalog-operator-serving-cert" not found Oct 11 10:28:08.416275 master-2 kubenswrapper[4776]: E1011 10:28:08.414729 4776 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Oct 11 10:28:08.416275 master-2 kubenswrapper[4776]: E1011 10:28:08.414768 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls podName:6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c nodeName:}" failed. 
No retries permitted until 2025-10-11 10:28:12.414756773 +0000 UTC m=+127.199183562 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls") pod "ingress-operator-766ddf4575-wf7mj" (UID: "6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c") : secret "metrics-tls" not found Oct 11 10:28:12.369259 master-2 kubenswrapper[4776]: I1011 10:28:12.369080 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:12.369259 master-2 kubenswrapper[4776]: I1011 10:28:12.369217 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:12.370562 master-2 kubenswrapper[4776]: I1011 10:28:12.369314 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:12.370562 master-2 kubenswrapper[4776]: E1011 10:28:12.369739 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Oct 11 10:28:12.370562 master-2 kubenswrapper[4776]: E1011 10:28:12.369836 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert podName:6b8dc5b8-3c48-4dba-9992-6e269ca133f1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.369801569 +0000 UTC m=+135.154228318 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert") pod "cluster-autoscaler-operator-7ff449c7c5-cfvjb" (UID: "6b8dc5b8-3c48-4dba-9992-6e269ca133f1") : secret "cluster-autoscaler-operator-cert" not found Oct 11 10:28:12.370562 master-2 kubenswrapper[4776]: E1011 10:28:12.369860 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:12.370562 master-2 kubenswrapper[4776]: E1011 10:28:12.369899 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Oct 11 10:28:12.370562 master-2 kubenswrapper[4776]: E1011 10:28:12.369925 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.369906662 +0000 UTC m=+135.154333381 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:12.370562 master-2 kubenswrapper[4776]: E1011 10:28:12.369955 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.369937683 +0000 UTC m=+135.154364412 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-webhook-server-cert" not found Oct 11 10:28:12.470567 master-2 kubenswrapper[4776]: I1011 10:28:12.470457 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:28:12.470567 master-2 kubenswrapper[4776]: I1011 10:28:12.470578 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:28:12.471003 master-2 kubenswrapper[4776]: I1011 10:28:12.470631 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:28:12.471003 master-2 kubenswrapper[4776]: I1011 10:28:12.470743 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:12.471003 master-2 kubenswrapper[4776]: I1011 10:28:12.470796 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:12.471003 master-2 kubenswrapper[4776]: E1011 10:28:12.470747 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 11 10:28:12.471003 
master-2 kubenswrapper[4776]: I1011 10:28:12.470866 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:12.471003 master-2 kubenswrapper[4776]: I1011 10:28:12.470904 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:12.471003 master-2 kubenswrapper[4776]: E1011 10:28:12.470903 4776 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: E1011 10:28:12.470945 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs podName:64310b0b-bae1-4ad3-b106-6d59d47d29b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.47091375 +0000 UTC m=+135.255340479 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs") pod "multus-admission-controller-77b66fddc8-5r2t9" (UID: "64310b0b-bae1-4ad3-b106-6d59d47d29b2") : secret "multus-admission-controller-secret" not found Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: E1011 10:28:12.471053 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: E1011 10:28:12.471115 4776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: E1011 10:28:12.471138 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert podName:d4354488-1b32-422d-bb06-767a952192a5 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.471114226 +0000 UTC m=+135.255540975 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert") pod "olm-operator-867f8475d9-8lf59" (UID: "d4354488-1b32-422d-bb06-767a952192a5") : secret "olm-operator-serving-cert" not found Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: E1011 10:28:12.471050 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: E1011 10:28:12.470820 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: E1011 10:28:12.471050 4776 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: I1011 10:28:12.471139 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: E1011 10:28:12.471192 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls podName:7e860f23-9dae-4606-9426-0edec38a332f nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.471160008 +0000 UTC m=+135.255586737 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-84f9cbd5d9-bjntd" (UID: "7e860f23-9dae-4606-9426-0edec38a332f") : secret "control-plane-machine-set-operator-tls" not found Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: E1011 10:28:12.471238 4776 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: E1011 10:28:12.471355 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls podName:08b7d4e3-1682-4a3b-a757-84ded3a16764 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.471310363 +0000 UTC m=+135.255737262 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls") pod "machine-approver-7876f99457-h7hhv" (UID: "08b7d4e3-1682-4a3b-a757-84ded3a16764") : secret "machine-approver-tls" not found Oct 11 10:28:12.471376 master-2 kubenswrapper[4776]: E1011 10:28:12.471385 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.471371524 +0000 UTC m=+135.255798463 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "performance-addon-operator-webhook-cert" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471409 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs podName:cbf33a7e-abea-411d-9a19-85cfe67debe3 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.471399395 +0000 UTC m=+135.255826324 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs") pod "multus-admission-controller-77b66fddc8-s5r5b" (UID: "cbf33a7e-abea-411d-9a19-85cfe67debe3") : secret "multus-admission-controller-secret" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471437 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls podName:dbaa6ca7-9865-42f6-8030-2decf702caa1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.471421256 +0000 UTC m=+135.255848185 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-5b5dd85dcc-h8588" (UID: "dbaa6ca7-9865-42f6-8030-2decf702caa1") : secret "cluster-monitoring-operator-tls" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: I1011 10:28:12.471490 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: I1011 10:28:12.471539 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: I1011 10:28:12.471597 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471631 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics podName:7652e0ca-2d18-48c7-80e0-f4a936038377 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.471611551 +0000 UTC m=+135.256038410 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics") pod "marketplace-operator-c4f798dd4-wsmdd" (UID: "7652e0ca-2d18-48c7-80e0-f4a936038377") : secret "marketplace-operator-metrics" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: I1011 10:28:12.471666 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471730 4776 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471740 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: I1011 10:28:12.471744 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471788 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls podName:e4536c84-d8f3-4808-bf8b-9b40695f46de nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.471772406 +0000 UTC m=+135.256199305 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls") pod "machine-config-operator-7b75469658-jtmwh" (UID: "e4536c84-d8f3-4808-bf8b-9b40695f46de") : secret "mco-proxy-tls" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471814 4776 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471835 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.471821197 +0000 UTC m=+135.256248096 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "node-tuning-operator-tls" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471835 4776 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: I1011 10:28:12.471868 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471933 4776 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471812 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.471883 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls podName:b562963f-7112-411a-a64c-3b8eba909c59 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.471864478 +0000 UTC m=+135.256291227 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls") pod "cluster-image-registry-operator-6b8674d7ff-mwbsr" (UID: "b562963f-7112-411a-a64c-3b8eba909c59") : secret "image-registry-operator-tls" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: I1011 10:28:12.472059 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.472093 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert podName:eba1e82e-9f3e-4273-836e-9407cc394b10 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.472064844 +0000 UTC m=+135.256491723 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-5cf49b6487-8d7xr" (UID: "eba1e82e-9f3e-4273-836e-9407cc394b10") : secret "cloud-credential-operator-serving-cert" not found Oct 11 10:28:12.472082 master-2 kubenswrapper[4776]: E1011 10:28:12.472125 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls podName:6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.472115636 +0000 UTC m=+135.256542575 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls") pod "ingress-operator-766ddf4575-wf7mj" (UID: "6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c") : secret "metrics-tls" not found Oct 11 10:28:12.473357 master-2 kubenswrapper[4776]: E1011 10:28:12.472149 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Oct 11 10:28:12.473357 master-2 kubenswrapper[4776]: E1011 10:28:12.472161 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert podName:e3281eb7-fb96-4bae-8c55-b79728d426b0 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.472147976 +0000 UTC m=+135.256574925 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert") pod "catalog-operator-f966fb6f8-8gkqg" (UID: "e3281eb7-fb96-4bae-8c55-b79728d426b0") : secret "catalog-operator-serving-cert" not found Oct 11 10:28:12.473357 master-2 kubenswrapper[4776]: I1011 10:28:12.472203 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls\") pod \"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:12.473357 master-2 kubenswrapper[4776]: E1011 10:28:12.472215 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert podName:e20ebc39-150b-472a-bb22-328d8f5db87b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.472197058 +0000 UTC m=+135.256623927 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert") pod "package-server-manager-798cc87f55-xzntp" (UID: "e20ebc39-150b-472a-bb22-328d8f5db87b") : secret "package-server-manager-serving-cert" not found Oct 11 10:28:12.473357 master-2 kubenswrapper[4776]: I1011 10:28:12.472245 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:12.473357 master-2 kubenswrapper[4776]: E1011 10:28:12.472364 4776 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Oct 11 10:28:12.473357 master-2 kubenswrapper[4776]: E1011 10:28:12.472417 4776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Oct 11 10:28:12.473357 master-2 kubenswrapper[4776]: E1011 10:28:12.472455 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls podName:893af718-1fec-4b8b-8349-d85f978f4140 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.472440335 +0000 UTC m=+135.256867174 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls") pod "dns-operator-7769d9677-wh775" (UID: "893af718-1fec-4b8b-8349-d85f978f4140") : secret "metrics-tls" not found Oct 11 10:28:12.473357 master-2 kubenswrapper[4776]: E1011 10:28:12.472489 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls podName:548333d7-2374-4c38-b4fd-45c2bee2ac4e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:20.472475706 +0000 UTC m=+135.256902625 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls") pod "machine-api-operator-9dbb96f7-b88g6" (UID: "548333d7-2374-4c38-b4fd-45c2bee2ac4e") : secret "machine-api-operator-tls" not found Oct 11 10:28:13.130027 master-2 kubenswrapper[4776]: I1011 10:28:13.129971 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x5wg8" Oct 11 10:28:17.254614 master-2 kubenswrapper[4776]: I1011 10:28:17.254304 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config" Oct 11 10:28:17.266327 master-2 kubenswrapper[4776]: I1011 10:28:17.266103 4776 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret" Oct 11 10:28:17.508983 master-2 kubenswrapper[4776]: I1011 10:28:17.508934 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" event={"ID":"9d362fb9-48e4-4d72-a940-ec6c9c051fac","Type":"ContainerDied","Data":"e90f7250992a43c127322ebe6a88091226718110bb2803a9ad4004b18fa488dd"} Oct 11 10:28:17.509383 master-2 kubenswrapper[4776]: I1011 10:28:17.508656 4776 generic.go:334] "Generic (PLEG): container finished" podID="9d362fb9-48e4-4d72-a940-ec6c9c051fac" containerID="e90f7250992a43c127322ebe6a88091226718110bb2803a9ad4004b18fa488dd" exitCode=0 Oct 11 10:28:17.511532 master-2 kubenswrapper[4776]: I1011 10:28:17.511506 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" event={"ID":"7004f3ff-6db8-446d-94c1-1223e975299d","Type":"ContainerStarted","Data":"13931b8d42a71308bf45f4bd6921b1ab789c1a4f3b0b726209cf504aecb722a9"} Oct 11 10:28:17.513994 master-2 kubenswrapper[4776]: I1011 10:28:17.513716 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-vv9w8" event={"ID":"a0b806b9-13ff-45fa-afba-5d0c89eac7df","Type":"ContainerStarted","Data":"ef0b776ca5352b516fbbf8012bd62838aed8c9c935aab5fafdd14b5c301abac5"} Oct 11 10:28:17.515651 master-2 kubenswrapper[4776]: I1011 10:28:17.515627 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" event={"ID":"05cf2994-c049-4f42-b2d8-83b23e7e763a","Type":"ContainerStarted","Data":"09c81be857efc36ce4d8daa4c934f1437649343613c3da9aac63b8db86978ed6"} Oct 11 10:28:17.518375 master-2 kubenswrapper[4776]: I1011 10:28:17.518337 4776 generic.go:334] "Generic (PLEG): container finished" podID="e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc" containerID="cd8822d95b19957043a12128b0929e8211cff636608b79c99c54fc322091c398" exitCode=0 Oct 11 10:28:17.518440 master-2 kubenswrapper[4776]: I1011 10:28:17.518379 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" event={"ID":"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc","Type":"ContainerDied","Data":"cd8822d95b19957043a12128b0929e8211cff636608b79c99c54fc322091c398"} Oct 11 10:28:17.520688 master-2 kubenswrapper[4776]: I1011 10:28:17.520657 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" 
event={"ID":"89e02bcb-b3fe-4a45-a531-4ab41d8ee424","Type":"ContainerStarted","Data":"2311ef45db3839160f50cf52dfc54b1dab3ed31b8a810ff4165ecab2eb84274b"} Oct 11 10:28:17.522164 master-2 kubenswrapper[4776]: I1011 10:28:17.522143 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" event={"ID":"f8050d30-444b-40a5-829c-1e3b788910a0","Type":"ContainerStarted","Data":"5001bd6df546d1ceae6c934b8abd9de1f6f93838b1e654bff89ff6b24eb56ca9"} Oct 11 10:28:17.547485 master-2 kubenswrapper[4776]: I1011 10:28:17.547398 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" podStartSLOduration=69.923064333 podStartE2EDuration="1m21.54737089s" podCreationTimestamp="2025-10-11 10:26:56 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.592536635 +0000 UTC m=+120.376963344" lastFinishedPulling="2025-10-11 10:28:17.216843192 +0000 UTC m=+132.001269901" observedRunningTime="2025-10-11 10:28:17.544261629 +0000 UTC m=+132.328688338" watchObservedRunningTime="2025-10-11 10:28:17.54737089 +0000 UTC m=+132.331797619" Oct 11 10:28:17.581367 master-2 kubenswrapper[4776]: I1011 10:28:17.581275 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" podStartSLOduration=68.964024381 podStartE2EDuration="1m20.581253285s" podCreationTimestamp="2025-10-11 10:26:57 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.608591887 +0000 UTC m=+120.393018596" lastFinishedPulling="2025-10-11 10:28:17.225820791 +0000 UTC m=+132.010247500" observedRunningTime="2025-10-11 10:28:17.580132932 +0000 UTC m=+132.364559631" watchObservedRunningTime="2025-10-11 10:28:17.581253285 +0000 UTC m=+132.365679984" Oct 11 10:28:17.618464 master-2 kubenswrapper[4776]: I1011 10:28:17.618037 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-vv9w8" podStartSLOduration=99.990421182 podStartE2EDuration="1m51.618015354s" podCreationTimestamp="2025-10-11 10:26:26 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.593329217 +0000 UTC m=+120.377755926" lastFinishedPulling="2025-10-11 10:28:17.220923379 +0000 UTC m=+132.005350098" observedRunningTime="2025-10-11 10:28:17.5967199 +0000 UTC m=+132.381146619" watchObservedRunningTime="2025-10-11 10:28:17.618015354 +0000 UTC m=+132.402442073" Oct 11 10:28:17.633201 master-2 kubenswrapper[4776]: I1011 10:28:17.633137 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" podStartSLOduration=69.099223355 podStartE2EDuration="1m20.633117988s" podCreationTimestamp="2025-10-11 10:26:57 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.683995339 +0000 UTC m=+120.468422048" lastFinishedPulling="2025-10-11 10:28:17.217889932 +0000 UTC m=+132.002316681" observedRunningTime="2025-10-11 10:28:17.631736079 +0000 UTC m=+132.416162788" watchObservedRunningTime="2025-10-11 10:28:17.633117988 +0000 UTC m=+132.417544697" Oct 11 10:28:17.654140 master-2 kubenswrapper[4776]: I1011 10:28:17.654067 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" podStartSLOduration=68.167568224 podStartE2EDuration="1m19.654044331s" 
podCreationTimestamp="2025-10-11 10:26:58 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.731883138 +0000 UTC m=+120.516309837" lastFinishedPulling="2025-10-11 10:28:17.218359235 +0000 UTC m=+132.002785944" observedRunningTime="2025-10-11 10:28:17.654022851 +0000 UTC m=+132.438449560" watchObservedRunningTime="2025-10-11 10:28:17.654044331 +0000 UTC m=+132.438471040" Oct 11 10:28:17.845649 master-2 kubenswrapper[4776]: I1011 10:28:17.845524 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:28:17.849875 master-2 kubenswrapper[4776]: I1011 10:28:17.849824 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 11 10:28:17.856310 master-2 kubenswrapper[4776]: E1011 10:28:17.856259 4776 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Oct 11 10:28:17.856399 master-2 kubenswrapper[4776]: E1011 10:28:17.856351 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs podName:35b21a7b-2a5a-4511-a2d5-d950752b4bda nodeName:}" failed. No retries permitted until 2025-10-11 10:29:21.856327346 +0000 UTC m=+196.640754055 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs") pod "network-metrics-daemon-w52cn" (UID: "35b21a7b-2a5a-4511-a2d5-d950752b4bda") : secret "metrics-daemon-secret" not found Oct 11 10:28:18.081715 master-2 kubenswrapper[4776]: I1011 10:28:18.081663 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/master-2-debug-gpmgw"] Oct 11 10:28:18.085691 master-2 kubenswrapper[4776]: I1011 10:28:18.082064 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/master-2-debug-gpmgw" Oct 11 10:28:18.148571 master-2 kubenswrapper[4776]: I1011 10:28:18.148460 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj76g\" (UniqueName: \"kubernetes.io/projected/02b839f3-9031-49c2-87a5-630975c7e14c-kube-api-access-xj76g\") pod \"master-2-debug-gpmgw\" (UID: \"02b839f3-9031-49c2-87a5-630975c7e14c\") " pod="assisted-installer/master-2-debug-gpmgw" Oct 11 10:28:18.148779 master-2 kubenswrapper[4776]: I1011 10:28:18.148755 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02b839f3-9031-49c2-87a5-630975c7e14c-host\") pod \"master-2-debug-gpmgw\" (UID: \"02b839f3-9031-49c2-87a5-630975c7e14c\") " pod="assisted-installer/master-2-debug-gpmgw" Oct 11 10:28:18.250390 master-2 kubenswrapper[4776]: I1011 10:28:18.250014 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02b839f3-9031-49c2-87a5-630975c7e14c-host\") pod \"master-2-debug-gpmgw\" (UID: \"02b839f3-9031-49c2-87a5-630975c7e14c\") " pod="assisted-installer/master-2-debug-gpmgw" Oct 11 10:28:18.250390 master-2 kubenswrapper[4776]: I1011 10:28:18.250398 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj76g\" (UniqueName: \"kubernetes.io/projected/02b839f3-9031-49c2-87a5-630975c7e14c-kube-api-access-xj76g\") pod \"master-2-debug-gpmgw\" (UID: \"02b839f3-9031-49c2-87a5-630975c7e14c\") " pod="assisted-installer/master-2-debug-gpmgw" Oct 11 10:28:18.250805 master-2 kubenswrapper[4776]: I1011 10:28:18.250746 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02b839f3-9031-49c2-87a5-630975c7e14c-host\") pod \"master-2-debug-gpmgw\" (UID: \"02b839f3-9031-49c2-87a5-630975c7e14c\") " pod="assisted-installer/master-2-debug-gpmgw" Oct 11 10:28:18.278071 master-2 kubenswrapper[4776]: I1011 10:28:18.278019 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj76g\" (UniqueName: \"kubernetes.io/projected/02b839f3-9031-49c2-87a5-630975c7e14c-kube-api-access-xj76g\") pod \"master-2-debug-gpmgw\" (UID: \"02b839f3-9031-49c2-87a5-630975c7e14c\") " pod="assisted-installer/master-2-debug-gpmgw" Oct 11 10:28:18.398742 master-2 kubenswrapper[4776]: I1011 10:28:18.398113 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/master-2-debug-gpmgw" Oct 11 10:28:18.418371 master-2 kubenswrapper[4776]: W1011 10:28:18.418281 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02b839f3_9031_49c2_87a5_630975c7e14c.slice/crio-ac1317205de0639804fa13c50df1e4818a9d1c3375597958265c0b50192bde59 WatchSource:0}: Error finding container ac1317205de0639804fa13c50df1e4818a9d1c3375597958265c0b50192bde59: Status 404 returned error can't find the container with id ac1317205de0639804fa13c50df1e4818a9d1c3375597958265c0b50192bde59 Oct 11 10:28:18.530989 master-2 kubenswrapper[4776]: I1011 10:28:18.530745 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" event={"ID":"6967590c-695e-4e20-964b-0c643abdf367","Type":"ContainerStarted","Data":"e3b061ce9d0eb2a283f25a5377c1ec78f61f62a5f692f1f7bc57aa0c47f8c828"} Oct 11 10:28:18.532590 master-2 kubenswrapper[4776]: I1011 10:28:18.532546 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/master-2-debug-gpmgw" event={"ID":"02b839f3-9031-49c2-87a5-630975c7e14c","Type":"ContainerStarted","Data":"ac1317205de0639804fa13c50df1e4818a9d1c3375597958265c0b50192bde59"} Oct 11 10:28:18.534145 master-2 kubenswrapper[4776]: I1011 10:28:18.533637 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" event={"ID":"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1","Type":"ContainerStarted","Data":"0b7bb22c9bcc10fdcba6be60dad53b1b80998cc309ea84f073feff75133d2485"} Oct 11 10:28:18.536241 master-2 kubenswrapper[4776]: I1011 10:28:18.536217 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" event={"ID":"e487f283-7482-463c-90b6-a812e00d0e35","Type":"ContainerStarted","Data":"75e09f57e9f3d2d1f9408b0cb83b216f2432311fdbe734afce1ac9bd82b32464"} Oct 11 10:28:18.537589 master-2 kubenswrapper[4776]: I1011 10:28:18.537564 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-v6dfc" event={"ID":"8757af56-20fb-439e-adba-7e4e50378936","Type":"ContainerStarted","Data":"25fc39e758a2899d86cad41cf89dd130d8c1f8d7d2271b02d90a5c1db60a0fae"} Oct 11 10:28:18.539169 master-2 kubenswrapper[4776]: I1011 10:28:18.539134 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" event={"ID":"58aef476-6586-47bb-bf45-dbeccac6271a","Type":"ContainerStarted","Data":"4086f54b40e82a9c1520dd01a01f1e17aa8e4bfa53d48bc75f9b65494739f67c"} Oct 11 10:28:18.545444 master-2 kubenswrapper[4776]: I1011 10:28:18.545386 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" podStartSLOduration=97.965610083 podStartE2EDuration="1m49.545375898s" podCreationTimestamp="2025-10-11 10:26:29 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.620222642 +0000 UTC m=+120.404649341" lastFinishedPulling="2025-10-11 10:28:17.199988447 +0000 UTC m=+131.984415156" observedRunningTime="2025-10-11 10:28:18.54375117 +0000 UTC m=+133.328177879" watchObservedRunningTime="2025-10-11 10:28:18.545375898 +0000 UTC m=+133.329802607" Oct 11 10:28:18.555873 master-2 kubenswrapper[4776]: I1011 10:28:18.555807 4776 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" podStartSLOduration=73.704323249 podStartE2EDuration="1m25.555793298s" podCreationTimestamp="2025-10-11 10:26:53 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.331738565 +0000 UTC m=+120.116165274" lastFinishedPulling="2025-10-11 10:28:17.183208614 +0000 UTC m=+131.967635323" observedRunningTime="2025-10-11 10:28:18.554865741 +0000 UTC m=+133.339292450" watchObservedRunningTime="2025-10-11 10:28:18.555793298 +0000 UTC m=+133.340220007" Oct 11 10:28:18.568733 master-2 kubenswrapper[4776]: I1011 10:28:18.567837 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="assisted-installer/assisted-installer-controller-v6dfc" podStartSLOduration=212.108560161 podStartE2EDuration="3m43.567816914s" podCreationTimestamp="2025-10-11 10:24:35 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.79271464 +0000 UTC m=+120.577141349" lastFinishedPulling="2025-10-11 10:28:17.251971393 +0000 UTC m=+132.036398102" observedRunningTime="2025-10-11 10:28:18.566079003 +0000 UTC m=+133.350505722" watchObservedRunningTime="2025-10-11 10:28:18.567816914 +0000 UTC m=+133.352243623" Oct 11 10:28:18.582706 master-2 kubenswrapper[4776]: I1011 10:28:18.578804 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" podStartSLOduration=69.997017828 podStartE2EDuration="1m21.57878701s" podCreationTimestamp="2025-10-11 10:26:57 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.636779899 +0000 UTC m=+120.421206608" lastFinishedPulling="2025-10-11 10:28:17.218549081 +0000 UTC m=+132.002975790" observedRunningTime="2025-10-11 10:28:18.577623316 +0000 UTC m=+133.362050025" watchObservedRunningTime="2025-10-11 10:28:18.57878701 +0000 UTC m=+133.363213719" Oct 11 10:28:18.600737 master-2 kubenswrapper[4776]: I1011 10:28:18.597642 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" podStartSLOduration=71.180854421 podStartE2EDuration="1m22.597622732s" podCreationTimestamp="2025-10-11 10:26:56 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.801023088 +0000 UTC m=+120.585449797" lastFinishedPulling="2025-10-11 10:28:17.217791399 +0000 UTC m=+132.002218108" observedRunningTime="2025-10-11 10:28:18.590394594 +0000 UTC m=+133.374821303" watchObservedRunningTime="2025-10-11 10:28:18.597622732 +0000 UTC m=+133.382049441" Oct 11 10:28:18.840915 master-2 kubenswrapper[4776]: I1011 10:28:18.840846 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-95l49"] Oct 11 10:28:18.841435 master-2 kubenswrapper[4776]: I1011 10:28:18.841404 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-95l49" Oct 11 10:28:18.849136 master-2 kubenswrapper[4776]: I1011 10:28:18.849065 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-95l49"] Oct 11 10:28:18.961946 master-2 kubenswrapper[4776]: I1011 10:28:18.961883 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djkhs\" (UniqueName: \"kubernetes.io/projected/b5b27c80-52a3-4747-a128-28952a667faa-kube-api-access-djkhs\") pod \"csi-snapshot-controller-ddd7d64cd-95l49\" (UID: \"b5b27c80-52a3-4747-a128-28952a667faa\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-95l49" Oct 11 10:28:19.062834 master-2 kubenswrapper[4776]: I1011 10:28:19.062803 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djkhs\" (UniqueName: \"kubernetes.io/projected/b5b27c80-52a3-4747-a128-28952a667faa-kube-api-access-djkhs\") pod \"csi-snapshot-controller-ddd7d64cd-95l49\" (UID: \"b5b27c80-52a3-4747-a128-28952a667faa\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-95l49" Oct 11 10:28:19.082299 master-2 kubenswrapper[4776]: I1011 10:28:19.082244 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djkhs\" (UniqueName: \"kubernetes.io/projected/b5b27c80-52a3-4747-a128-28952a667faa-kube-api-access-djkhs\") pod \"csi-snapshot-controller-ddd7d64cd-95l49\" (UID: \"b5b27c80-52a3-4747-a128-28952a667faa\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-95l49" Oct 11 10:28:19.156371 master-2 kubenswrapper[4776]: I1011 10:28:19.156321 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-95l49" Oct 11 10:28:20.381076 master-2 kubenswrapper[4776]: I1011 10:28:20.381035 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:20.381711 master-2 kubenswrapper[4776]: I1011 10:28:20.381078 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:20.381711 master-2 kubenswrapper[4776]: I1011 10:28:20.381124 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:20.381711 master-2 kubenswrapper[4776]: E1011 10:28:20.381243 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:20.381711 master-2 kubenswrapper[4776]: E1011 10:28:20.381321 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Oct 11 10:28:20.381711 master-2 kubenswrapper[4776]: E1011 10:28:20.381332 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.381304643 +0000 UTC m=+151.165731532 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:20.381711 master-2 kubenswrapper[4776]: E1011 10:28:20.381331 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Oct 11 10:28:20.381711 master-2 kubenswrapper[4776]: E1011 10:28:20.381504 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.381479308 +0000 UTC m=+151.165906187 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-webhook-server-cert" not found Oct 11 10:28:20.381711 master-2 kubenswrapper[4776]: E1011 10:28:20.381603 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert podName:6b8dc5b8-3c48-4dba-9992-6e269ca133f1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.381566411 +0000 UTC m=+151.165993120 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert") pod "cluster-autoscaler-operator-7ff449c7c5-cfvjb" (UID: "6b8dc5b8-3c48-4dba-9992-6e269ca133f1") : secret "cluster-autoscaler-operator-cert" not found Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.493150 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.493731 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.493769 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.493796 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.493822 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.493849 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls\") pod 
\"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.493913 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.493948 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls\") pod \"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.493982 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.494012 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.494048 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.494076 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.494129 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.494171 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.494200 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.494221 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: I1011 10:28:20.494243 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: E1011 10:28:20.496654 4776 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Oct 11 10:28:20.497282 master-2 kubenswrapper[4776]: E1011 10:28:20.496759 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls podName:893af718-1fec-4b8b-8349-d85f978f4140 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.496735057 +0000 UTC m=+151.281161766 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls") pod "dns-operator-7769d9677-wh775" (UID: "893af718-1fec-4b8b-8349-d85f978f4140") : secret "metrics-tls" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497604 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497649 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs podName:cbf33a7e-abea-411d-9a19-85cfe67debe3 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.497636524 +0000 UTC m=+151.282063233 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs") pod "multus-admission-controller-77b66fddc8-s5r5b" (UID: "cbf33a7e-abea-411d-9a19-85cfe67debe3") : secret "multus-admission-controller-secret" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497713 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497768 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497816 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.497789128 +0000 UTC m=+151.282216017 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "performance-addon-operator-webhook-cert" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497833 4776 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497845 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs podName:64310b0b-bae1-4ad3-b106-6d59d47d29b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.497831179 +0000 UTC m=+151.282258078 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs") pod "multus-admission-controller-77b66fddc8-5r2t9" (UID: "64310b0b-bae1-4ad3-b106-6d59d47d29b2") : secret "multus-admission-controller-secret" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497867 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls podName:7e860f23-9dae-4606-9426-0edec38a332f nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.49785767 +0000 UTC m=+151.282284379 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-84f9cbd5d9-bjntd" (UID: "7e860f23-9dae-4606-9426-0edec38a332f") : secret "control-plane-machine-set-operator-tls" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497899 4776 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497906 4776 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497926 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls podName:dbaa6ca7-9865-42f6-8030-2decf702caa1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.497918352 +0000 UTC m=+151.282345061 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-5b5dd85dcc-h8588" (UID: "dbaa6ca7-9865-42f6-8030-2decf702caa1") : secret "cluster-monitoring-operator-tls" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497925 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497945 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert podName:eba1e82e-9f3e-4273-836e-9407cc394b10 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.497934412 +0000 UTC m=+151.282361111 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-5cf49b6487-8d7xr" (UID: "eba1e82e-9f3e-4273-836e-9407cc394b10") : secret "cloud-credential-operator-serving-cert" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497958 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert podName:e3281eb7-fb96-4bae-8c55-b79728d426b0 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.497953163 +0000 UTC m=+151.282379872 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert") pod "catalog-operator-f966fb6f8-8gkqg" (UID: "e3281eb7-fb96-4bae-8c55-b79728d426b0") : secret "catalog-operator-serving-cert" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497722 4776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497975 4776 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497990 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls podName:548333d7-2374-4c38-b4fd-45c2bee2ac4e nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.497981273 +0000 UTC m=+151.282407982 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls") pod "machine-api-operator-9dbb96f7-b88g6" (UID: "548333d7-2374-4c38-b4fd-45c2bee2ac4e") : secret "machine-api-operator-tls" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.497990 4776 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.498010 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls podName:e4536c84-d8f3-4808-bf8b-9b40695f46de nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.498000744 +0000 UTC m=+151.282427683 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls") pod "machine-config-operator-7b75469658-jtmwh" (UID: "e4536c84-d8f3-4808-bf8b-9b40695f46de") : secret "mco-proxy-tls" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.498030 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls podName:6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.498024105 +0000 UTC m=+151.282451024 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls") pod "ingress-operator-766ddf4575-wf7mj" (UID: "6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c") : secret "metrics-tls" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.498031 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.498062 4776 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.498073 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert podName:e20ebc39-150b-472a-bb22-328d8f5db87b nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.498065796 +0000 UTC m=+151.282492715 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert") pod "package-server-manager-798cc87f55-xzntp" (UID: "e20ebc39-150b-472a-bb22-328d8f5db87b") : secret "package-server-manager-serving-cert" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.498089 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls podName:b562963f-7112-411a-a64c-3b8eba909c59 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.498082226 +0000 UTC m=+151.282508935 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls") pod "cluster-image-registry-operator-6b8674d7ff-mwbsr" (UID: "b562963f-7112-411a-a64c-3b8eba909c59") : secret "image-registry-operator-tls" not found Oct 11 10:28:20.498087 master-2 kubenswrapper[4776]: E1011 10:28:20.498102 4776 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Oct 11 10:28:20.498991 master-2 kubenswrapper[4776]: E1011 10:28:20.498126 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls podName:b16a4f10-c724-43cf-acd4-b3f5aa575653 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.498119167 +0000 UTC m=+151.282545876 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls") pod "cluster-node-tuning-operator-7866c9bdf4-js8sj" (UID: "b16a4f10-c724-43cf-acd4-b3f5aa575653") : secret "node-tuning-operator-tls" not found Oct 11 10:28:20.498991 master-2 kubenswrapper[4776]: E1011 10:28:20.498130 4776 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Oct 11 10:28:20.498991 master-2 kubenswrapper[4776]: E1011 10:28:20.498152 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls podName:08b7d4e3-1682-4a3b-a757-84ded3a16764 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.498144148 +0000 UTC m=+151.282570857 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls") pod "machine-approver-7876f99457-h7hhv" (UID: "08b7d4e3-1682-4a3b-a757-84ded3a16764") : secret "machine-approver-tls" not found Oct 11 10:28:20.498991 master-2 kubenswrapper[4776]: E1011 10:28:20.498162 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Oct 11 10:28:20.498991 master-2 kubenswrapper[4776]: E1011 10:28:20.498191 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert podName:d4354488-1b32-422d-bb06-767a952192a5 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.498181399 +0000 UTC m=+151.282608108 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert") pod "olm-operator-867f8475d9-8lf59" (UID: "d4354488-1b32-422d-bb06-767a952192a5") : secret "olm-operator-serving-cert" not found Oct 11 10:28:20.498991 master-2 kubenswrapper[4776]: E1011 10:28:20.498889 4776 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Oct 11 10:28:20.499211 master-2 kubenswrapper[4776]: E1011 10:28:20.499021 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics podName:7652e0ca-2d18-48c7-80e0-f4a936038377 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.498991852 +0000 UTC m=+151.283418741 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics") pod "marketplace-operator-c4f798dd4-wsmdd" (UID: "7652e0ca-2d18-48c7-80e0-f4a936038377") : secret "marketplace-operator-metrics" not found Oct 11 10:28:20.548018 master-2 kubenswrapper[4776]: I1011 10:28:20.547944 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-95l49"] Oct 11 10:28:20.553135 master-2 kubenswrapper[4776]: I1011 10:28:20.553084 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" event={"ID":"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc","Type":"ContainerStarted","Data":"960e453a9267449ffe7e9c7dd5e312b5b4a9b57933093dc218b7400eca3f6b59"} Oct 11 10:28:20.556489 master-2 kubenswrapper[4776]: I1011 10:28:20.556295 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" event={"ID":"9d362fb9-48e4-4d72-a940-ec6c9c051fac","Type":"ContainerStarted","Data":"53f868ac8b0d0bcce62eb761aa1d944f2aeff4c3ea9d582cec7865a78d5991fa"} Oct 11 10:28:20.556994 master-2 kubenswrapper[4776]: I1011 10:28:20.556953 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:20.589776 master-2 kubenswrapper[4776]: I1011 10:28:20.589664 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" podStartSLOduration=94.796118568 podStartE2EDuration="1m49.589640493s" podCreationTimestamp="2025-10-11 10:26:31 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.582013132 +0000 UTC m=+120.366439841" lastFinishedPulling="2025-10-11 10:28:20.375535057 +0000 UTC m=+135.159961766" observedRunningTime="2025-10-11 10:28:20.58433232 +0000 UTC m=+135.368759039" watchObservedRunningTime="2025-10-11 10:28:20.589640493 +0000 UTC m=+135.374067202" Oct 11 10:28:20.621991 master-2 kubenswrapper[4776]: W1011 10:28:20.621785 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5b27c80_52a3_4747_a128_28952a667faa.slice/crio-d1d6bf2f2a56f97f6b140f03d0a6fae4eeb9d2c208dfb81f5ebd257f0612b3cc WatchSource:0}: Error finding container d1d6bf2f2a56f97f6b140f03d0a6fae4eeb9d2c208dfb81f5ebd257f0612b3cc: Status 404 returned error can't find the container with id d1d6bf2f2a56f97f6b140f03d0a6fae4eeb9d2c208dfb81f5ebd257f0612b3cc Oct 11 10:28:21.563594 master-2 kubenswrapper[4776]: I1011 10:28:21.563083 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-95l49" event={"ID":"b5b27c80-52a3-4747-a128-28952a667faa","Type":"ContainerStarted","Data":"d1d6bf2f2a56f97f6b140f03d0a6fae4eeb9d2c208dfb81f5ebd257f0612b3cc"} Oct 11 10:28:21.567976 master-2 kubenswrapper[4776]: I1011 10:28:21.567901 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" event={"ID":"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc","Type":"ContainerDied","Data":"960e453a9267449ffe7e9c7dd5e312b5b4a9b57933093dc218b7400eca3f6b59"} Oct 11 10:28:21.568528 master-2 kubenswrapper[4776]: I1011 10:28:21.568471 4776 generic.go:334] "Generic (PLEG): container finished" 
podID="e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc" containerID="960e453a9267449ffe7e9c7dd5e312b5b4a9b57933093dc218b7400eca3f6b59" exitCode=0 Oct 11 10:28:23.342045 master-2 kubenswrapper[4776]: I1011 10:28:23.341991 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:28:29.597753 master-2 kubenswrapper[4776]: I1011 10:28:29.597481 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-95l49" event={"ID":"b5b27c80-52a3-4747-a128-28952a667faa","Type":"ContainerStarted","Data":"c8a8c52b73cf91ea6bba79404a35830c95d980291188b0aaa7590f6318351fe4"} Oct 11 10:28:29.600233 master-2 kubenswrapper[4776]: I1011 10:28:29.599980 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" event={"ID":"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc","Type":"ContainerStarted","Data":"79dc22f0a550f7a03fdfde0714643e7ddafbfdc868d34604a3c34fe38c3b91d6"} Oct 11 10:28:29.601828 master-2 kubenswrapper[4776]: I1011 10:28:29.601798 4776 generic.go:334] "Generic (PLEG): container finished" podID="02b839f3-9031-49c2-87a5-630975c7e14c" containerID="84ee2aa257fd47e52819ac5b509341f1915d01766245c38ccb1a2b7cae91293a" exitCode=0 Oct 11 10:28:29.601907 master-2 kubenswrapper[4776]: I1011 10:28:29.601870 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/master-2-debug-gpmgw" event={"ID":"02b839f3-9031-49c2-87a5-630975c7e14c","Type":"ContainerDied","Data":"84ee2aa257fd47e52819ac5b509341f1915d01766245c38ccb1a2b7cae91293a"} Oct 11 10:28:29.603498 master-2 kubenswrapper[4776]: I1011 10:28:29.603061 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" event={"ID":"59763d5b-237f-4095-bf52-86bb0154381c","Type":"ContainerStarted","Data":"3ce0e4c23d3462cc28a54aa78bddda37020e10bc5a0b28d2d4d54aa602abe170"} Oct 11 10:28:29.604733 master-2 kubenswrapper[4776]: I1011 10:28:29.604689 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" event={"ID":"e540333c-4b4d-439e-a82a-cd3a97c95a43","Type":"ContainerStarted","Data":"0ba5d510196688f6b97d8a36964cc97a744fb54a3c5e03a38ad0b42712671103"} Oct 11 10:28:29.605837 master-2 kubenswrapper[4776]: I1011 10:28:29.605801 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" event={"ID":"88129ec6-6f99-42a1-842a-6a965c6b58fe","Type":"ContainerStarted","Data":"e266c7a2e3d240b36e8aa83f32c98d86c0362e7f150797bd2e151f66b7e2430e"} Oct 11 10:28:29.615797 master-2 kubenswrapper[4776]: I1011 10:28:29.612708 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-95l49" podStartSLOduration=3.679856145 podStartE2EDuration="11.612691606s" podCreationTimestamp="2025-10-11 10:28:18 +0000 UTC" firstStartedPulling="2025-10-11 10:28:20.624388183 +0000 UTC m=+135.408814892" lastFinishedPulling="2025-10-11 10:28:28.557223644 +0000 UTC m=+143.341650353" observedRunningTime="2025-10-11 10:28:29.610367289 +0000 UTC m=+144.394793998" watchObservedRunningTime="2025-10-11 10:28:29.612691606 +0000 UTC m=+144.397118315" Oct 11 10:28:29.626726 master-2 kubenswrapper[4776]: I1011 10:28:29.626604 4776 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" podStartSLOduration=70.39977609 podStartE2EDuration="1m33.626584016s" podCreationTimestamp="2025-10-11 10:26:56 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.345731178 +0000 UTC m=+120.130157887" lastFinishedPulling="2025-10-11 10:28:28.572539104 +0000 UTC m=+143.356965813" observedRunningTime="2025-10-11 10:28:29.625037262 +0000 UTC m=+144.409463991" watchObservedRunningTime="2025-10-11 10:28:29.626584016 +0000 UTC m=+144.411010725" Oct 11 10:28:29.643256 master-2 kubenswrapper[4776]: I1011 10:28:29.643206 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" podStartSLOduration=94.662210773 podStartE2EDuration="1m56.643188704s" podCreationTimestamp="2025-10-11 10:26:33 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.822105946 +0000 UTC m=+120.606532655" lastFinishedPulling="2025-10-11 10:28:27.803083877 +0000 UTC m=+142.587510586" observedRunningTime="2025-10-11 10:28:29.64166159 +0000 UTC m=+144.426088309" watchObservedRunningTime="2025-10-11 10:28:29.643188704 +0000 UTC m=+144.427615413" Oct 11 10:28:29.753293 master-2 kubenswrapper[4776]: I1011 10:28:29.753217 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" podStartSLOduration=94.089606485 podStartE2EDuration="1m56.753170692s" podCreationTimestamp="2025-10-11 10:26:33 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.853914861 +0000 UTC m=+120.638341570" lastFinishedPulling="2025-10-11 10:28:28.517479028 +0000 UTC m=+143.301905777" observedRunningTime="2025-10-11 10:28:29.66041629 +0000 UTC m=+144.444843009" watchObservedRunningTime="2025-10-11 10:28:29.753170692 +0000 UTC m=+144.537597401" Oct 11 10:28:29.760810 master-2 kubenswrapper[4776]: I1011 10:28:29.760761 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" podStartSLOduration=70.139045478 podStartE2EDuration="1m32.760742759s" podCreationTimestamp="2025-10-11 10:26:57 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.895714645 +0000 UTC m=+120.680141354" lastFinishedPulling="2025-10-11 10:28:28.517411926 +0000 UTC m=+143.301838635" observedRunningTime="2025-10-11 10:28:29.750490104 +0000 UTC m=+144.534916813" watchObservedRunningTime="2025-10-11 10:28:29.760742759 +0000 UTC m=+144.545169468" Oct 11 10:28:29.784567 master-2 kubenswrapper[4776]: I1011 10:28:29.784525 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["assisted-installer/master-2-debug-gpmgw"] Oct 11 10:28:29.786758 master-2 kubenswrapper[4776]: I1011 10:28:29.786732 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["assisted-installer/master-2-debug-gpmgw"] Oct 11 10:28:30.068829 master-2 kubenswrapper[4776]: I1011 10:28:30.068791 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-796c687c6d-k46j4"] Oct 11 10:28:30.069423 master-2 kubenswrapper[4776]: E1011 10:28:30.069406 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b839f3-9031-49c2-87a5-630975c7e14c" containerName="container-00" Oct 11 10:28:30.069503 master-2 kubenswrapper[4776]: I1011 10:28:30.069493 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b839f3-9031-49c2-87a5-630975c7e14c" containerName="container-00" Oct 11 10:28:30.069654 master-2 
kubenswrapper[4776]: I1011 10:28:30.069641 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="02b839f3-9031-49c2-87a5-630975c7e14c" containerName="container-00" Oct 11 10:28:30.070560 master-2 kubenswrapper[4776]: I1011 10:28:30.070539 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.073355 master-2 kubenswrapper[4776]: I1011 10:28:30.073322 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 11 10:28:30.073583 master-2 kubenswrapper[4776]: I1011 10:28:30.073561 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 11 10:28:30.073833 master-2 kubenswrapper[4776]: I1011 10:28:30.073780 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 11 10:28:30.074021 master-2 kubenswrapper[4776]: I1011 10:28:30.074006 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0" Oct 11 10:28:30.074534 master-2 kubenswrapper[4776]: I1011 10:28:30.074512 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 11 10:28:30.075980 master-2 kubenswrapper[4776]: I1011 10:28:30.075953 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0" Oct 11 10:28:30.076558 master-2 kubenswrapper[4776]: I1011 10:28:30.076541 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 11 10:28:30.076722 master-2 kubenswrapper[4776]: I1011 10:28:30.076659 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 11 10:28:30.076951 master-2 kubenswrapper[4776]: I1011 10:28:30.076930 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 11 10:28:30.084742 master-2 kubenswrapper[4776]: I1011 10:28:30.082537 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 11 10:28:30.084742 master-2 kubenswrapper[4776]: I1011 10:28:30.084192 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-796c687c6d-k46j4"] Oct 11 10:28:30.177049 master-2 kubenswrapper[4776]: I1011 10:28:30.176944 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-client\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.177578 master-2 kubenswrapper[4776]: I1011 10:28:30.177552 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-serving-ca\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.177715 master-2 kubenswrapper[4776]: I1011 10:28:30.177658 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxmhq\" (UniqueName: \"kubernetes.io/projected/9bf5fcc5-d60e-45da-976d-56ac881274f1-kube-api-access-lxmhq\") pod 
\"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.177843 master-2 kubenswrapper[4776]: I1011 10:28:30.177824 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.177918 master-2 kubenswrapper[4776]: I1011 10:28:30.177904 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-image-import-ca\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.178043 master-2 kubenswrapper[4776]: I1011 10:28:30.178029 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-node-pullsecrets\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.178122 master-2 kubenswrapper[4776]: I1011 10:28:30.178108 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-serving-cert\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.178206 master-2 kubenswrapper[4776]: I1011 10:28:30.178192 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-trusted-ca-bundle\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.178292 master-2 kubenswrapper[4776]: I1011 10:28:30.178277 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-encryption-config\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.178419 master-2 kubenswrapper[4776]: I1011 10:28:30.178405 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-config\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.178525 master-2 kubenswrapper[4776]: I1011 10:28:30.178511 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit-dir\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.279721 master-2 
kubenswrapper[4776]: I1011 10:28:30.279646 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-config\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.279721 master-2 kubenswrapper[4776]: I1011 10:28:30.279710 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit-dir\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.279944 master-2 kubenswrapper[4776]: I1011 10:28:30.279764 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-client\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.279944 master-2 kubenswrapper[4776]: I1011 10:28:30.279805 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-serving-ca\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.279944 master-2 kubenswrapper[4776]: I1011 10:28:30.279821 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxmhq\" (UniqueName: \"kubernetes.io/projected/9bf5fcc5-d60e-45da-976d-56ac881274f1-kube-api-access-lxmhq\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.279944 master-2 kubenswrapper[4776]: I1011 10:28:30.279867 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.279944 master-2 kubenswrapper[4776]: I1011 10:28:30.279884 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-image-import-ca\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.279944 master-2 kubenswrapper[4776]: I1011 10:28:30.279911 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-node-pullsecrets\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.279944 master-2 kubenswrapper[4776]: I1011 10:28:30.279928 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-serving-cert\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " 
pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.279944 master-2 kubenswrapper[4776]: I1011 10:28:30.279947 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-trusted-ca-bundle\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.280242 master-2 kubenswrapper[4776]: I1011 10:28:30.279964 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-encryption-config\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.280693 master-2 kubenswrapper[4776]: I1011 10:28:30.280637 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-config\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.280764 master-2 kubenswrapper[4776]: I1011 10:28:30.280751 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-node-pullsecrets\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.280829 master-2 kubenswrapper[4776]: E1011 10:28:30.280656 4776 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Oct 11 10:28:30.280957 master-2 kubenswrapper[4776]: E1011 10:28:30.280942 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit podName:9bf5fcc5-d60e-45da-976d-56ac881274f1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:30.780921838 +0000 UTC m=+145.565348547 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit") pod "apiserver-796c687c6d-k46j4" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1") : configmap "audit-0" not found Oct 11 10:28:30.281215 master-2 kubenswrapper[4776]: I1011 10:28:30.281191 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-image-import-ca\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.281697 master-2 kubenswrapper[4776]: I1011 10:28:30.281660 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit-dir\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.281795 master-2 kubenswrapper[4776]: I1011 10:28:30.281738 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-serving-ca\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.283341 master-2 kubenswrapper[4776]: I1011 10:28:30.283300 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-trusted-ca-bundle\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.295703 master-2 kubenswrapper[4776]: I1011 10:28:30.287760 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-client\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.295703 master-2 kubenswrapper[4776]: I1011 10:28:30.287887 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-encryption-config\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.295703 master-2 kubenswrapper[4776]: I1011 10:28:30.292148 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-serving-cert\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.296779 master-2 kubenswrapper[4776]: I1011 10:28:30.296397 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxmhq\" (UniqueName: \"kubernetes.io/projected/9bf5fcc5-d60e-45da-976d-56ac881274f1-kube-api-access-lxmhq\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.619947 master-2 kubenswrapper[4776]: I1011 10:28:30.619885 4776 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-controller-manager/controller-manager-5d9b59775c-llh2g"] Oct 11 10:28:30.620477 master-2 kubenswrapper[4776]: I1011 10:28:30.620440 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:30.624059 master-2 kubenswrapper[4776]: I1011 10:28:30.624007 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 11 10:28:30.624311 master-2 kubenswrapper[4776]: I1011 10:28:30.624280 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 11 10:28:30.624465 master-2 kubenswrapper[4776]: I1011 10:28:30.624439 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 11 10:28:30.624836 master-2 kubenswrapper[4776]: I1011 10:28:30.624808 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 11 10:28:30.624949 master-2 kubenswrapper[4776]: I1011 10:28:30.624923 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 11 10:28:30.625033 master-2 kubenswrapper[4776]: I1011 10:28:30.624991 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 11 10:28:30.632960 master-2 kubenswrapper[4776]: I1011 10:28:30.632898 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d9b59775c-llh2g"] Oct 11 10:28:30.646397 master-2 kubenswrapper[4776]: I1011 10:28:30.646297 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/master-2-debug-gpmgw" Oct 11 10:28:30.685287 master-2 kubenswrapper[4776]: I1011 10:28:30.685216 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02b839f3-9031-49c2-87a5-630975c7e14c-host\") pod \"02b839f3-9031-49c2-87a5-630975c7e14c\" (UID: \"02b839f3-9031-49c2-87a5-630975c7e14c\") " Oct 11 10:28:30.685601 master-2 kubenswrapper[4776]: I1011 10:28:30.685325 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj76g\" (UniqueName: \"kubernetes.io/projected/02b839f3-9031-49c2-87a5-630975c7e14c-kube-api-access-xj76g\") pod \"02b839f3-9031-49c2-87a5-630975c7e14c\" (UID: \"02b839f3-9031-49c2-87a5-630975c7e14c\") " Oct 11 10:28:30.685601 master-2 kubenswrapper[4776]: I1011 10:28:30.685331 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02b839f3-9031-49c2-87a5-630975c7e14c-host" (OuterVolumeSpecName: "host") pod "02b839f3-9031-49c2-87a5-630975c7e14c" (UID: "02b839f3-9031-49c2-87a5-630975c7e14c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:28:30.685734 master-2 kubenswrapper[4776]: I1011 10:28:30.685614 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:30.685734 master-2 kubenswrapper[4776]: I1011 10:28:30.685640 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:30.686097 master-2 kubenswrapper[4776]: I1011 10:28:30.686047 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:30.686153 master-2 kubenswrapper[4776]: I1011 10:28:30.686107 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfjtx\" (UniqueName: \"kubernetes.io/projected/a4117af6-90eb-4a97-af54-06b199075a28-kube-api-access-wfjtx\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:30.686238 master-2 kubenswrapper[4776]: I1011 10:28:30.686209 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:30.686332 master-2 kubenswrapper[4776]: I1011 10:28:30.686309 4776 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02b839f3-9031-49c2-87a5-630975c7e14c-host\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:30.688331 master-2 kubenswrapper[4776]: I1011 10:28:30.688290 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02b839f3-9031-49c2-87a5-630975c7e14c-kube-api-access-xj76g" (OuterVolumeSpecName: "kube-api-access-xj76g") pod "02b839f3-9031-49c2-87a5-630975c7e14c" (UID: "02b839f3-9031-49c2-87a5-630975c7e14c"). InnerVolumeSpecName "kube-api-access-xj76g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:28:30.787603 master-2 kubenswrapper[4776]: I1011 10:28:30.787555 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:30.787842 master-2 kubenswrapper[4776]: I1011 10:28:30.787652 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:30.787842 master-2 kubenswrapper[4776]: I1011 10:28:30.787690 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfjtx\" (UniqueName: \"kubernetes.io/projected/a4117af6-90eb-4a97-af54-06b199075a28-kube-api-access-wfjtx\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:30.787842 master-2 kubenswrapper[4776]: E1011 10:28:30.787710 4776 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Oct 11 10:28:30.787842 master-2 kubenswrapper[4776]: E1011 10:28:30.787750 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:30.787842 master-2 kubenswrapper[4776]: E1011 10:28:30.787769 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit podName:9bf5fcc5-d60e-45da-976d-56ac881274f1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:31.787753383 +0000 UTC m=+146.572180092 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit") pod "apiserver-796c687c6d-k46j4" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1") : configmap "audit-0" not found Oct 11 10:28:30.787842 master-2 kubenswrapper[4776]: E1011 10:28:30.787781 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:31.287775804 +0000 UTC m=+146.072202503 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : configmap "client-ca" not found Oct 11 10:28:30.787842 master-2 kubenswrapper[4776]: I1011 10:28:30.787717 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:30.787842 master-2 kubenswrapper[4776]: E1011 10:28:30.787789 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Oct 11 10:28:30.787842 master-2 kubenswrapper[4776]: E1011 10:28:30.787810 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:31.287801254 +0000 UTC m=+146.072227963 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : configmap "openshift-global-ca" not found Oct 11 10:28:30.788199 master-2 kubenswrapper[4776]: I1011 10:28:30.787940 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:30.788199 master-2 kubenswrapper[4776]: I1011 10:28:30.787961 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:30.788199 master-2 kubenswrapper[4776]: I1011 10:28:30.788011 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj76g\" (UniqueName: \"kubernetes.io/projected/02b839f3-9031-49c2-87a5-630975c7e14c-kube-api-access-xj76g\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:30.788199 master-2 kubenswrapper[4776]: E1011 10:28:30.788080 4776 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:30.788199 master-2 kubenswrapper[4776]: E1011 10:28:30.788103 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:31.288095063 +0000 UTC m=+146.072521772 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : secret "serving-cert" not found Oct 11 10:28:30.788199 master-2 kubenswrapper[4776]: E1011 10:28:30.788136 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Oct 11 10:28:30.788199 master-2 kubenswrapper[4776]: E1011 10:28:30.788160 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:31.288152444 +0000 UTC m=+146.072579153 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : configmap "config" not found Oct 11 10:28:30.816651 master-2 kubenswrapper[4776]: I1011 10:28:30.816571 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfjtx\" (UniqueName: \"kubernetes.io/projected/a4117af6-90eb-4a97-af54-06b199075a28-kube-api-access-wfjtx\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:31.294236 master-2 kubenswrapper[4776]: I1011 10:28:31.294166 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:31.294236 master-2 kubenswrapper[4776]: I1011 10:28:31.294222 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:31.294634 master-2 kubenswrapper[4776]: I1011 10:28:31.294324 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:31.294634 master-2 kubenswrapper[4776]: I1011 10:28:31.294342 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:31.294634 master-2 kubenswrapper[4776]: E1011 10:28:31.294455 4776 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:31.294634 master-2 kubenswrapper[4776]: E1011 10:28:31.294480 4776 configmap.go:193] Couldn't get configMap 
openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Oct 11 10:28:31.294634 master-2 kubenswrapper[4776]: E1011 10:28:31.294499 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:32.294484634 +0000 UTC m=+147.078911343 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : secret "serving-cert" not found Oct 11 10:28:31.294634 master-2 kubenswrapper[4776]: E1011 10:28:31.294619 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:32.294590747 +0000 UTC m=+147.079017486 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : configmap "openshift-global-ca" not found Oct 11 10:28:31.295086 master-2 kubenswrapper[4776]: E1011 10:28:31.294717 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Oct 11 10:28:31.295086 master-2 kubenswrapper[4776]: E1011 10:28:31.294758 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:32.294745322 +0000 UTC m=+147.079172071 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : configmap "config" not found Oct 11 10:28:31.295086 master-2 kubenswrapper[4776]: E1011 10:28:31.294808 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:31.295086 master-2 kubenswrapper[4776]: E1011 10:28:31.294843 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:32.294832114 +0000 UTC m=+147.079258863 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : configmap "client-ca" not found Oct 11 10:28:31.615287 master-2 kubenswrapper[4776]: I1011 10:28:31.615183 4776 scope.go:117] "RemoveContainer" containerID="84ee2aa257fd47e52819ac5b509341f1915d01766245c38ccb1a2b7cae91293a" Oct 11 10:28:31.615287 master-2 kubenswrapper[4776]: I1011 10:28:31.615241 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/master-2-debug-gpmgw" Oct 11 10:28:31.731817 master-2 kubenswrapper[4776]: I1011 10:28:31.731754 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb"] Oct 11 10:28:31.732515 master-2 kubenswrapper[4776]: I1011 10:28:31.732479 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:31.744344 master-2 kubenswrapper[4776]: I1011 10:28:31.744256 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 11 10:28:31.744344 master-2 kubenswrapper[4776]: I1011 10:28:31.744318 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 11 10:28:31.744479 master-2 kubenswrapper[4776]: I1011 10:28:31.744378 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 11 10:28:31.744479 master-2 kubenswrapper[4776]: I1011 10:28:31.744318 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 11 10:28:31.744794 master-2 kubenswrapper[4776]: I1011 10:28:31.744738 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 11 10:28:31.746521 master-2 kubenswrapper[4776]: I1011 10:28:31.746465 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb"] Oct 11 10:28:31.804483 master-2 kubenswrapper[4776]: I1011 10:28:31.804389 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:31.804823 master-2 kubenswrapper[4776]: I1011 10:28:31.804573 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-config\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:31.804823 master-2 kubenswrapper[4776]: I1011 10:28:31.804643 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:31.804823 master-2 kubenswrapper[4776]: E1011 10:28:31.804746 4776 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Oct 11 10:28:31.804823 master-2 kubenswrapper[4776]: I1011 10:28:31.804791 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: 
\"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:31.805330 master-2 kubenswrapper[4776]: E1011 10:28:31.804829 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit podName:9bf5fcc5-d60e-45da-976d-56ac881274f1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:33.804811469 +0000 UTC m=+148.589238188 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit") pod "apiserver-796c687c6d-k46j4" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1") : configmap "audit-0" not found Oct 11 10:28:31.805330 master-2 kubenswrapper[4776]: I1011 10:28:31.804900 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sqbz\" (UniqueName: \"kubernetes.io/projected/17bef070-1a9d-4090-b97a-7ce2c1c93b19-kube-api-access-9sqbz\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:31.906222 master-2 kubenswrapper[4776]: I1011 10:28:31.906151 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:31.906758 master-2 kubenswrapper[4776]: I1011 10:28:31.906237 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-config\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:31.906758 master-2 kubenswrapper[4776]: I1011 10:28:31.906351 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:31.906758 master-2 kubenswrapper[4776]: E1011 10:28:31.906375 4776 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:31.906758 master-2 kubenswrapper[4776]: I1011 10:28:31.906408 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sqbz\" (UniqueName: \"kubernetes.io/projected/17bef070-1a9d-4090-b97a-7ce2c1c93b19-kube-api-access-9sqbz\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:31.906758 master-2 kubenswrapper[4776]: E1011 10:28:31.906458 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. 
No retries permitted until 2025-10-11 10:28:32.406436945 +0000 UTC m=+147.190863654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : secret "serving-cert" not found Oct 11 10:28:31.906758 master-2 kubenswrapper[4776]: E1011 10:28:31.906570 4776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:31.906758 master-2 kubenswrapper[4776]: E1011 10:28:31.906661 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:32.406639791 +0000 UTC m=+147.191066500 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : configmap "client-ca" not found Oct 11 10:28:31.907387 master-2 kubenswrapper[4776]: I1011 10:28:31.907336 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-config\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:31.927135 master-2 kubenswrapper[4776]: I1011 10:28:31.927093 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sqbz\" (UniqueName: \"kubernetes.io/projected/17bef070-1a9d-4090-b97a-7ce2c1c93b19-kube-api-access-9sqbz\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:32.062263 master-2 kubenswrapper[4776]: I1011 10:28:32.062207 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02b839f3-9031-49c2-87a5-630975c7e14c" path="/var/lib/kubelet/pods/02b839f3-9031-49c2-87a5-630975c7e14c/volumes" Oct 11 10:28:32.310558 master-2 kubenswrapper[4776]: I1011 10:28:32.310427 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:32.310558 master-2 kubenswrapper[4776]: I1011 10:28:32.310485 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:32.311075 master-2 kubenswrapper[4776]: I1011 10:28:32.310624 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles\") pod \"controller-manager-5d9b59775c-llh2g\" 
(UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:32.311075 master-2 kubenswrapper[4776]: I1011 10:28:32.310654 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:32.311075 master-2 kubenswrapper[4776]: E1011 10:28:32.310815 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:32.311075 master-2 kubenswrapper[4776]: E1011 10:28:32.310866 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:34.310850711 +0000 UTC m=+149.095277420 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : configmap "client-ca" not found Oct 11 10:28:32.311075 master-2 kubenswrapper[4776]: E1011 10:28:32.310927 4776 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:32.311075 master-2 kubenswrapper[4776]: E1011 10:28:32.311013 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:34.310995016 +0000 UTC m=+149.095421725 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : secret "serving-cert" not found Oct 11 10:28:32.311806 master-2 kubenswrapper[4776]: I1011 10:28:32.311761 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:32.311993 master-2 kubenswrapper[4776]: I1011 10:28:32.311938 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:32.412118 master-2 kubenswrapper[4776]: I1011 10:28:32.412024 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:32.412376 master-2 kubenswrapper[4776]: I1011 10:28:32.412163 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:32.412376 master-2 kubenswrapper[4776]: E1011 10:28:32.412309 4776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:32.412376 master-2 kubenswrapper[4776]: E1011 10:28:32.412342 4776 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:32.412376 master-2 kubenswrapper[4776]: E1011 10:28:32.412368 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:33.412354394 +0000 UTC m=+148.196781103 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : configmap "client-ca" not found Oct 11 10:28:32.412586 master-2 kubenswrapper[4776]: E1011 10:28:32.412419 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:33.412386705 +0000 UTC m=+148.196813434 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : secret "serving-cert" not found Oct 11 10:28:32.614765 master-2 kubenswrapper[4776]: I1011 10:28:32.613983 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plqrv\" (UniqueName: \"kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv\") pod \"network-check-target-jdkgd\" (UID: \"f6543c6f-6f31-431e-9327-60c8cfd70c7e\") " pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:28:32.616856 master-2 kubenswrapper[4776]: I1011 10:28:32.616828 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 11 10:28:32.620360 master-2 kubenswrapper[4776]: I1011 10:28:32.620175 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5mn8b" event={"ID":"18ca0678-0b0d-4d5d-bc50-a0a098301f38","Type":"ContainerStarted","Data":"ae98d45df9584e1ebff96e0b7a9a74984b149159c94abf567838341fa680617e"} Oct 11 10:28:32.626754 master-2 kubenswrapper[4776]: I1011 10:28:32.626723 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 11 10:28:32.635245 master-2 kubenswrapper[4776]: I1011 10:28:32.635180 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-5mn8b" podStartSLOduration=5.950253988 podStartE2EDuration="28.635164269s" podCreationTimestamp="2025-10-11 10:28:04 +0000 UTC" firstStartedPulling="2025-10-11 10:28:05.832383232 +0000 UTC m=+120.616809941" lastFinishedPulling="2025-10-11 10:28:28.517293513 +0000 UTC m=+143.301720222" observedRunningTime="2025-10-11 10:28:32.633518413 +0000 UTC m=+147.417945172" watchObservedRunningTime="2025-10-11 10:28:32.635164269 +0000 UTC m=+147.419590978" Oct 11 10:28:32.640231 master-2 kubenswrapper[4776]: I1011 10:28:32.640181 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plqrv\" (UniqueName: \"kubernetes.io/projected/f6543c6f-6f31-431e-9327-60c8cfd70c7e-kube-api-access-plqrv\") pod \"network-check-target-jdkgd\" (UID: \"f6543c6f-6f31-431e-9327-60c8cfd70c7e\") " pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:28:32.678231 master-2 kubenswrapper[4776]: I1011 10:28:32.678151 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:28:32.846916 master-2 kubenswrapper[4776]: I1011 10:28:32.846438 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-jdkgd"] Oct 11 10:28:32.854007 master-2 kubenswrapper[4776]: W1011 10:28:32.853938 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6543c6f_6f31_431e_9327_60c8cfd70c7e.slice/crio-84e81236f7928a2ee4e2cf2be3beb7780aa0eee7727e21e21cd76476b8426999 WatchSource:0}: Error finding container 84e81236f7928a2ee4e2cf2be3beb7780aa0eee7727e21e21cd76476b8426999: Status 404 returned error can't find the container with id 84e81236f7928a2ee4e2cf2be3beb7780aa0eee7727e21e21cd76476b8426999 Oct 11 10:28:33.305769 master-2 kubenswrapper[4776]: I1011 10:28:33.305667 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d9b59775c-llh2g"] Oct 11 10:28:33.306003 master-2 kubenswrapper[4776]: E1011 10:28:33.305892 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" podUID="a4117af6-90eb-4a97-af54-06b199075a28" Oct 11 10:28:33.425488 master-2 kubenswrapper[4776]: I1011 10:28:33.425364 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:33.425488 master-2 kubenswrapper[4776]: E1011 10:28:33.425450 4776 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:33.425801 master-2 kubenswrapper[4776]: E1011 10:28:33.425509 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:35.425495527 +0000 UTC m=+150.209922236 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : secret "serving-cert" not found Oct 11 10:28:33.425856 master-2 kubenswrapper[4776]: I1011 10:28:33.425797 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:33.425901 master-2 kubenswrapper[4776]: E1011 10:28:33.425890 4776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:33.426100 master-2 kubenswrapper[4776]: E1011 10:28:33.426048 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:35.426019083 +0000 UTC m=+150.210445832 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : configmap "client-ca" not found Oct 11 10:28:33.627963 master-2 kubenswrapper[4776]: I1011 10:28:33.627779 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:33.627963 master-2 kubenswrapper[4776]: I1011 10:28:33.627764 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-jdkgd" event={"ID":"f6543c6f-6f31-431e-9327-60c8cfd70c7e","Type":"ContainerStarted","Data":"84e81236f7928a2ee4e2cf2be3beb7780aa0eee7727e21e21cd76476b8426999"} Oct 11 10:28:33.637074 master-2 kubenswrapper[4776]: I1011 10:28:33.637030 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:33.731026 master-2 kubenswrapper[4776]: I1011 10:28:33.729883 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config\") pod \"a4117af6-90eb-4a97-af54-06b199075a28\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " Oct 11 10:28:33.731026 master-2 kubenswrapper[4776]: I1011 10:28:33.729992 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfjtx\" (UniqueName: \"kubernetes.io/projected/a4117af6-90eb-4a97-af54-06b199075a28-kube-api-access-wfjtx\") pod \"a4117af6-90eb-4a97-af54-06b199075a28\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " Oct 11 10:28:33.731026 master-2 kubenswrapper[4776]: I1011 10:28:33.730046 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles\") pod \"a4117af6-90eb-4a97-af54-06b199075a28\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " Oct 11 10:28:33.731026 master-2 kubenswrapper[4776]: I1011 10:28:33.730406 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config" (OuterVolumeSpecName: "config") pod "a4117af6-90eb-4a97-af54-06b199075a28" (UID: "a4117af6-90eb-4a97-af54-06b199075a28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:28:33.731026 master-2 kubenswrapper[4776]: I1011 10:28:33.730838 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:33.732378 master-2 kubenswrapper[4776]: I1011 10:28:33.732292 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a4117af6-90eb-4a97-af54-06b199075a28" (UID: "a4117af6-90eb-4a97-af54-06b199075a28"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:28:33.734293 master-2 kubenswrapper[4776]: I1011 10:28:33.734230 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4117af6-90eb-4a97-af54-06b199075a28-kube-api-access-wfjtx" (OuterVolumeSpecName: "kube-api-access-wfjtx") pod "a4117af6-90eb-4a97-af54-06b199075a28" (UID: "a4117af6-90eb-4a97-af54-06b199075a28"). InnerVolumeSpecName "kube-api-access-wfjtx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:28:33.832069 master-2 kubenswrapper[4776]: I1011 10:28:33.832007 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit\") pod \"apiserver-796c687c6d-k46j4\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:33.832625 master-2 kubenswrapper[4776]: I1011 10:28:33.832606 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfjtx\" (UniqueName: \"kubernetes.io/projected/a4117af6-90eb-4a97-af54-06b199075a28-kube-api-access-wfjtx\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:33.832738 master-2 kubenswrapper[4776]: I1011 10:28:33.832723 4776 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-proxy-ca-bundles\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:33.833007 master-2 kubenswrapper[4776]: E1011 10:28:33.832936 4776 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Oct 11 10:28:33.833106 master-2 kubenswrapper[4776]: E1011 10:28:33.833083 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit podName:9bf5fcc5-d60e-45da-976d-56ac881274f1 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:37.833054144 +0000 UTC m=+152.617480843 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit") pod "apiserver-796c687c6d-k46j4" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1") : configmap "audit-0" not found Oct 11 10:28:33.849645 master-2 kubenswrapper[4776]: I1011 10:28:33.849592 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-796c687c6d-k46j4"] Oct 11 10:28:33.850790 master-2 kubenswrapper[4776]: E1011 10:28:33.850765 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-796c687c6d-k46j4" podUID="9bf5fcc5-d60e-45da-976d-56ac881274f1" Oct 11 10:28:34.338362 master-2 kubenswrapper[4776]: I1011 10:28:34.338267 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:34.338758 master-2 kubenswrapper[4776]: I1011 10:28:34.338739 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert\") pod \"controller-manager-5d9b59775c-llh2g\" (UID: \"a4117af6-90eb-4a97-af54-06b199075a28\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:34.339042 master-2 kubenswrapper[4776]: E1011 10:28:34.339007 4776 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:34.339104 master-2 kubenswrapper[4776]: E1011 10:28:34.339082 4776 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:38.339063705 +0000 UTC m=+153.123490414 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : secret "serving-cert" not found Oct 11 10:28:34.339401 master-2 kubenswrapper[4776]: E1011 10:28:34.339332 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:34.339536 master-2 kubenswrapper[4776]: E1011 10:28:34.339500 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca podName:a4117af6-90eb-4a97-af54-06b199075a28 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:38.339466756 +0000 UTC m=+153.123893465 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca") pod "controller-manager-5d9b59775c-llh2g" (UID: "a4117af6-90eb-4a97-af54-06b199075a28") : configmap "client-ca" not found Oct 11 10:28:34.633563 master-2 kubenswrapper[4776]: I1011 10:28:34.633394 4776 generic.go:334] "Generic (PLEG): container finished" podID="e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc" containerID="79dc22f0a550f7a03fdfde0714643e7ddafbfdc868d34604a3c34fe38c3b91d6" exitCode=0 Oct 11 10:28:34.633563 master-2 kubenswrapper[4776]: I1011 10:28:34.633473 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" event={"ID":"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc","Type":"ContainerDied","Data":"79dc22f0a550f7a03fdfde0714643e7ddafbfdc868d34604a3c34fe38c3b91d6"} Oct 11 10:28:34.634393 master-2 kubenswrapper[4776]: I1011 10:28:34.634336 4776 scope.go:117] "RemoveContainer" containerID="79dc22f0a550f7a03fdfde0714643e7ddafbfdc868d34604a3c34fe38c3b91d6" Oct 11 10:28:34.636888 master-2 kubenswrapper[4776]: I1011 10:28:34.636843 4776 generic.go:334] "Generic (PLEG): container finished" podID="e540333c-4b4d-439e-a82a-cd3a97c95a43" containerID="0ba5d510196688f6b97d8a36964cc97a744fb54a3c5e03a38ad0b42712671103" exitCode=0 Oct 11 10:28:34.636978 master-2 kubenswrapper[4776]: I1011 10:28:34.636920 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" event={"ID":"e540333c-4b4d-439e-a82a-cd3a97c95a43","Type":"ContainerDied","Data":"0ba5d510196688f6b97d8a36964cc97a744fb54a3c5e03a38ad0b42712671103"} Oct 11 10:28:34.636978 master-2 kubenswrapper[4776]: I1011 10:28:34.636972 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:34.637069 master-2 kubenswrapper[4776]: I1011 10:28:34.637038 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9b59775c-llh2g" Oct 11 10:28:34.637705 master-2 kubenswrapper[4776]: I1011 10:28:34.637686 4776 scope.go:117] "RemoveContainer" containerID="0ba5d510196688f6b97d8a36964cc97a744fb54a3c5e03a38ad0b42712671103" Oct 11 10:28:34.647013 master-2 kubenswrapper[4776]: I1011 10:28:34.646986 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:34.707448 master-2 kubenswrapper[4776]: I1011 10:28:34.707069 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d9b59775c-llh2g"] Oct 11 10:28:34.708817 master-2 kubenswrapper[4776]: I1011 10:28:34.708781 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d9b59775c-llh2g"] Oct 11 10:28:34.710183 master-2 kubenswrapper[4776]: I1011 10:28:34.710150 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-546b64dc7b-pdhmc"] Oct 11 10:28:34.711053 master-2 kubenswrapper[4776]: I1011 10:28:34.710987 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.714135 master-2 kubenswrapper[4776]: I1011 10:28:34.714078 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 11 10:28:34.718324 master-2 kubenswrapper[4776]: I1011 10:28:34.715268 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 11 10:28:34.718324 master-2 kubenswrapper[4776]: I1011 10:28:34.715270 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 11 10:28:34.718324 master-2 kubenswrapper[4776]: I1011 10:28:34.715313 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 11 10:28:34.718324 master-2 kubenswrapper[4776]: I1011 10:28:34.715374 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 11 10:28:34.718324 master-2 kubenswrapper[4776]: I1011 10:28:34.717691 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-546b64dc7b-pdhmc"] Oct 11 10:28:34.723972 master-2 kubenswrapper[4776]: I1011 10:28:34.723908 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 11 10:28:34.744459 master-2 kubenswrapper[4776]: I1011 10:28:34.744390 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-serving-ca\") pod \"9bf5fcc5-d60e-45da-976d-56ac881274f1\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " Oct 11 10:28:34.744459 master-2 kubenswrapper[4776]: I1011 10:28:34.744475 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit-dir\") pod \"9bf5fcc5-d60e-45da-976d-56ac881274f1\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " Oct 11 10:28:34.744884 master-2 kubenswrapper[4776]: I1011 10:28:34.744528 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-encryption-config\") pod \"9bf5fcc5-d60e-45da-976d-56ac881274f1\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " Oct 11 10:28:34.744884 master-2 kubenswrapper[4776]: I1011 10:28:34.744533 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9bf5fcc5-d60e-45da-976d-56ac881274f1" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:28:34.744884 master-2 kubenswrapper[4776]: I1011 10:28:34.744557 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxmhq\" (UniqueName: \"kubernetes.io/projected/9bf5fcc5-d60e-45da-976d-56ac881274f1-kube-api-access-lxmhq\") pod \"9bf5fcc5-d60e-45da-976d-56ac881274f1\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " Oct 11 10:28:34.744884 master-2 kubenswrapper[4776]: I1011 10:28:34.744606 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-node-pullsecrets\") pod \"9bf5fcc5-d60e-45da-976d-56ac881274f1\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " Oct 11 10:28:34.744884 master-2 kubenswrapper[4776]: I1011 10:28:34.744628 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-config\") pod \"9bf5fcc5-d60e-45da-976d-56ac881274f1\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " Oct 11 10:28:34.744884 master-2 kubenswrapper[4776]: I1011 10:28:34.744645 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-trusted-ca-bundle\") pod \"9bf5fcc5-d60e-45da-976d-56ac881274f1\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " Oct 11 10:28:34.744884 master-2 kubenswrapper[4776]: I1011 10:28:34.744695 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-client\") pod \"9bf5fcc5-d60e-45da-976d-56ac881274f1\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " Oct 11 10:28:34.744884 master-2 kubenswrapper[4776]: I1011 10:28:34.744714 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-image-import-ca\") pod \"9bf5fcc5-d60e-45da-976d-56ac881274f1\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " Oct 11 10:28:34.745252 master-2 kubenswrapper[4776]: I1011 10:28:34.744970 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "9bf5fcc5-d60e-45da-976d-56ac881274f1" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:28:34.745252 master-2 kubenswrapper[4776]: I1011 10:28:34.745120 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-serving-cert\") pod \"9bf5fcc5-d60e-45da-976d-56ac881274f1\" (UID: \"9bf5fcc5-d60e-45da-976d-56ac881274f1\") " Oct 11 10:28:34.745252 master-2 kubenswrapper[4776]: I1011 10:28:34.745138 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "9bf5fcc5-d60e-45da-976d-56ac881274f1" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:28:34.745381 master-2 kubenswrapper[4776]: I1011 10:28:34.745340 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.745429 master-2 kubenswrapper[4776]: I1011 10:28:34.745420 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.745485 master-2 kubenswrapper[4776]: I1011 10:28:34.745459 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-config\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.745530 master-2 kubenswrapper[4776]: I1011 10:28:34.745485 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-config" (OuterVolumeSpecName: "config") pod "9bf5fcc5-d60e-45da-976d-56ac881274f1" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:28:34.745530 master-2 kubenswrapper[4776]: I1011 10:28:34.745514 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "9bf5fcc5-d60e-45da-976d-56ac881274f1" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:28:34.745530 master-2 kubenswrapper[4776]: I1011 10:28:34.745521 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-proxy-ca-bundles\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.745649 master-2 kubenswrapper[4776]: I1011 10:28:34.745558 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjjzl\" (UniqueName: \"kubernetes.io/projected/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-kube-api-access-tjjzl\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.745743 master-2 kubenswrapper[4776]: I1011 10:28:34.745667 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-serving-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.745743 master-2 kubenswrapper[4776]: I1011 10:28:34.745697 4776 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4117af6-90eb-4a97-af54-06b199075a28-client-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.745743 master-2 kubenswrapper[4776]: I1011 10:28:34.745708 4776 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.745743 master-2 kubenswrapper[4776]: I1011 10:28:34.745717 4776 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9bf5fcc5-d60e-45da-976d-56ac881274f1-node-pullsecrets\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.745743 master-2 kubenswrapper[4776]: I1011 10:28:34.745726 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.745743 master-2 kubenswrapper[4776]: I1011 10:28:34.745735 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4117af6-90eb-4a97-af54-06b199075a28-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.745743 master-2 kubenswrapper[4776]: I1011 10:28:34.745747 4776 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-image-import-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.746966 master-2 kubenswrapper[4776]: I1011 10:28:34.746779 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9bf5fcc5-d60e-45da-976d-56ac881274f1" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:28:34.749726 master-2 kubenswrapper[4776]: I1011 10:28:34.748405 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "9bf5fcc5-d60e-45da-976d-56ac881274f1" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:28:34.749726 master-2 kubenswrapper[4776]: I1011 10:28:34.748824 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bf5fcc5-d60e-45da-976d-56ac881274f1-kube-api-access-lxmhq" (OuterVolumeSpecName: "kube-api-access-lxmhq") pod "9bf5fcc5-d60e-45da-976d-56ac881274f1" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1"). InnerVolumeSpecName "kube-api-access-lxmhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:28:34.749726 master-2 kubenswrapper[4776]: I1011 10:28:34.749049 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9bf5fcc5-d60e-45da-976d-56ac881274f1" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:28:34.753018 master-2 kubenswrapper[4776]: I1011 10:28:34.752980 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "9bf5fcc5-d60e-45da-976d-56ac881274f1" (UID: "9bf5fcc5-d60e-45da-976d-56ac881274f1"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:28:34.847979 master-2 kubenswrapper[4776]: I1011 10:28:34.847025 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.847979 master-2 kubenswrapper[4776]: I1011 10:28:34.847142 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.847979 master-2 kubenswrapper[4776]: I1011 10:28:34.847175 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-config\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.847979 master-2 kubenswrapper[4776]: I1011 10:28:34.847248 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-proxy-ca-bundles\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.847979 master-2 kubenswrapper[4776]: I1011 10:28:34.847286 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjjzl\" (UniqueName: \"kubernetes.io/projected/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-kube-api-access-tjjzl\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.847979 master-2 kubenswrapper[4776]: I1011 10:28:34.847402 4776 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.847979 master-2 kubenswrapper[4776]: I1011 10:28:34.847420 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-etcd-client\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.847979 master-2 kubenswrapper[4776]: I1011 10:28:34.847435 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.847979 master-2 kubenswrapper[4776]: I1011 10:28:34.847450 4776 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9bf5fcc5-d60e-45da-976d-56ac881274f1-encryption-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.847979 master-2 kubenswrapper[4776]: I1011 10:28:34.847464 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxmhq\" (UniqueName: 
\"kubernetes.io/projected/9bf5fcc5-d60e-45da-976d-56ac881274f1-kube-api-access-lxmhq\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:34.847979 master-2 kubenswrapper[4776]: E1011 10:28:34.847906 4776 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:34.847979 master-2 kubenswrapper[4776]: E1011 10:28:34.847962 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:35.347947098 +0000 UTC m=+150.132373807 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : secret "serving-cert" not found Oct 11 10:28:34.848482 master-2 kubenswrapper[4776]: E1011 10:28:34.848122 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:34.848482 master-2 kubenswrapper[4776]: E1011 10:28:34.848149 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:35.348142483 +0000 UTC m=+150.132569192 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : configmap "client-ca" not found Oct 11 10:28:34.849380 master-2 kubenswrapper[4776]: I1011 10:28:34.849326 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-config\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.849380 master-2 kubenswrapper[4776]: I1011 10:28:34.849356 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-proxy-ca-bundles\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:34.872528 master-2 kubenswrapper[4776]: I1011 10:28:34.872444 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjjzl\" (UniqueName: \"kubernetes.io/projected/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-kube-api-access-tjjzl\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:35.353291 master-2 kubenswrapper[4776]: I1011 10:28:35.353212 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:35.353537 master-2 
kubenswrapper[4776]: I1011 10:28:35.353344 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:35.353537 master-2 kubenswrapper[4776]: E1011 10:28:35.353506 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:35.353620 master-2 kubenswrapper[4776]: E1011 10:28:35.353570 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.353549757 +0000 UTC m=+151.137976486 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : configmap "client-ca" not found Oct 11 10:28:35.353763 master-2 kubenswrapper[4776]: E1011 10:28:35.353695 4776 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:35.353871 master-2 kubenswrapper[4776]: E1011 10:28:35.353841 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:36.353811725 +0000 UTC m=+151.138238444 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : secret "serving-cert" not found Oct 11 10:28:35.454553 master-2 kubenswrapper[4776]: I1011 10:28:35.454484 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:35.454735 master-2 kubenswrapper[4776]: I1011 10:28:35.454587 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:35.454789 master-2 kubenswrapper[4776]: E1011 10:28:35.454763 4776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:35.454837 master-2 kubenswrapper[4776]: E1011 10:28:35.454823 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:39.454808233 +0000 UTC m=+154.239234932 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : configmap "client-ca" not found Oct 11 10:28:35.454947 master-2 kubenswrapper[4776]: E1011 10:28:35.454851 4776 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:35.455068 master-2 kubenswrapper[4776]: E1011 10:28:35.455022 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:39.454988758 +0000 UTC m=+154.239415477 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : secret "serving-cert" not found Oct 11 10:28:35.642518 master-2 kubenswrapper[4776]: I1011 10:28:35.642394 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4" event={"ID":"e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc","Type":"ContainerStarted","Data":"f88349d5575db3cbd9b37db276b5c369862cf7f868981c67616f19244c7c612f"} Oct 11 10:28:35.643775 master-2 kubenswrapper[4776]: I1011 10:28:35.643744 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" event={"ID":"e540333c-4b4d-439e-a82a-cd3a97c95a43","Type":"ContainerStarted","Data":"89704c12769118c53c22d7f82d393e22678a4835f23d73f837dd13b143b58cd8"} Oct 11 10:28:35.643849 master-2 kubenswrapper[4776]: I1011 10:28:35.643785 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-796c687c6d-k46j4" Oct 11 10:28:35.689616 master-2 kubenswrapper[4776]: I1011 10:28:35.688146 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-796c687c6d-k46j4"] Oct 11 10:28:35.693705 master-2 kubenswrapper[4776]: I1011 10:28:35.692367 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-796c687c6d-k46j4"] Oct 11 10:28:35.759210 master-2 kubenswrapper[4776]: I1011 10:28:35.758694 4776 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9bf5fcc5-d60e-45da-976d-56ac881274f1-audit\") on node \"master-2\" DevicePath \"\"" Oct 11 10:28:36.063477 master-2 kubenswrapper[4776]: I1011 10:28:36.063415 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bf5fcc5-d60e-45da-976d-56ac881274f1" path="/var/lib/kubelet/pods/9bf5fcc5-d60e-45da-976d-56ac881274f1/volumes" Oct 11 10:28:36.064095 master-2 kubenswrapper[4776]: I1011 10:28:36.063761 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4117af6-90eb-4a97-af54-06b199075a28" path="/var/lib/kubelet/pods/a4117af6-90eb-4a97-af54-06b199075a28/volumes" Oct 11 10:28:36.369365 master-2 kubenswrapper[4776]: I1011 10:28:36.369240 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:36.369365 master-2 kubenswrapper[4776]: I1011 10:28:36.369337 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:36.369555 master-2 kubenswrapper[4776]: E1011 10:28:36.369508 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:36.369601 master-2 kubenswrapper[4776]: E1011 10:28:36.369566 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:38.369547593 +0000 UTC m=+153.153974312 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : configmap "client-ca" not found Oct 11 10:28:36.369927 master-2 kubenswrapper[4776]: E1011 10:28:36.369877 4776 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:36.369984 master-2 kubenswrapper[4776]: E1011 10:28:36.369973 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:38.369954005 +0000 UTC m=+153.154380714 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : secret "serving-cert" not found Oct 11 10:28:36.470713 master-2 kubenswrapper[4776]: I1011 10:28:36.470384 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:36.470713 master-2 kubenswrapper[4776]: I1011 10:28:36.470726 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:28:36.471049 master-2 kubenswrapper[4776]: I1011 10:28:36.470754 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:36.471049 master-2 kubenswrapper[4776]: E1011 10:28:36.470562 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:36.471049 master-2 kubenswrapper[4776]: E1011 10:28:36.470868 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:08.470845741 +0000 UTC m=+183.255272460 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-operator-tls" not found Oct 11 10:28:36.471049 master-2 kubenswrapper[4776]: E1011 10:28:36.470897 4776 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Oct 11 10:28:36.471049 master-2 kubenswrapper[4776]: E1011 10:28:36.470954 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert podName:66dee5be-e631-462d-8a2c-51a2031a83a2 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:08.470939643 +0000 UTC m=+183.255366352 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert") pod "cluster-baremetal-operator-6c8fbf4498-wq4jf" (UID: "66dee5be-e631-462d-8a2c-51a2031a83a2") : secret "cluster-baremetal-webhook-server-cert" not found Oct 11 10:28:36.478420 master-2 kubenswrapper[4776]: I1011 10:28:36.478328 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b8dc5b8-3c48-4dba-9992-6e269ca133f1-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-cfvjb\" (UID: \"6b8dc5b8-3c48-4dba-9992-6e269ca133f1\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:36.565937 master-2 kubenswrapper[4776]: I1011 10:28:36.565760 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" Oct 11 10:28:36.572171 master-2 kubenswrapper[4776]: I1011 10:28:36.572131 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:28:36.572311 master-2 kubenswrapper[4776]: I1011 10:28:36.572183 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:36.572311 master-2 kubenswrapper[4776]: I1011 10:28:36.572229 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:28:36.572311 master-2 kubenswrapper[4776]: I1011 10:28:36.572269 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls\") pod \"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:36.572311 master-2 kubenswrapper[4776]: I1011 10:28:36.572294 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:28:36.572476 master-2 kubenswrapper[4776]: I1011 10:28:36.572322 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 
10:28:36.572476 master-2 kubenswrapper[4776]: I1011 10:28:36.572369 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:28:36.572476 master-2 kubenswrapper[4776]: I1011 10:28:36.572400 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:28:36.572476 master-2 kubenswrapper[4776]: I1011 10:28:36.572445 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:28:36.572476 master-2 kubenswrapper[4776]: I1011 10:28:36.572471 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:36.572633 master-2 kubenswrapper[4776]: E1011 10:28:36.572469 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Oct 11 10:28:36.572633 master-2 kubenswrapper[4776]: E1011 10:28:36.572570 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert podName:e3281eb7-fb96-4bae-8c55-b79728d426b0 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:08.572546929 +0000 UTC m=+183.356973738 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert") pod "catalog-operator-f966fb6f8-8gkqg" (UID: "e3281eb7-fb96-4bae-8c55-b79728d426b0") : secret "catalog-operator-serving-cert" not found Oct 11 10:28:36.572906 master-2 kubenswrapper[4776]: I1011 10:28:36.572497 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: I1011 10:28:36.572951 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: I1011 10:28:36.572986 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: I1011 10:28:36.573020 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: I1011 10:28:36.573049 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: I1011 10:28:36.573092 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: E1011 10:28:36.573103 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: E1011 10:28:36.573177 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert podName:d4354488-1b32-422d-bb06-767a952192a5 nodeName:}" failed. 
No retries permitted until 2025-10-11 10:29:08.573158836 +0000 UTC m=+183.357585545 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert") pod "olm-operator-867f8475d9-8lf59" (UID: "d4354488-1b32-422d-bb06-767a952192a5") : secret "olm-operator-serving-cert" not found Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: I1011 10:28:36.573116 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: E1011 10:28:36.573582 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: E1011 10:28:36.573608 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs podName:64310b0b-bae1-4ad3-b106-6d59d47d29b2 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:08.573601779 +0000 UTC m=+183.358028488 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs") pod "multus-admission-controller-77b66fddc8-5r2t9" (UID: "64310b0b-bae1-4ad3-b106-6d59d47d29b2") : secret "multus-admission-controller-secret" not found Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: E1011 10:28:36.573705 4776 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Oct 11 10:28:36.574045 master-2 kubenswrapper[4776]: E1011 10:28:36.573729 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert podName:e20ebc39-150b-472a-bb22-328d8f5db87b nodeName:}" failed. No retries permitted until 2025-10-11 10:29:08.573722562 +0000 UTC m=+183.358149271 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert") pod "package-server-manager-798cc87f55-xzntp" (UID: "e20ebc39-150b-472a-bb22-328d8f5db87b") : secret "package-server-manager-serving-cert" not found Oct 11 10:28:36.574578 master-2 kubenswrapper[4776]: E1011 10:28:36.574344 4776 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Oct 11 10:28:36.574578 master-2 kubenswrapper[4776]: E1011 10:28:36.574448 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls podName:548333d7-2374-4c38-b4fd-45c2bee2ac4e nodeName:}" failed. No retries permitted until 2025-10-11 10:29:08.574417273 +0000 UTC m=+183.358844022 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls") pod "machine-api-operator-9dbb96f7-b88g6" (UID: "548333d7-2374-4c38-b4fd-45c2bee2ac4e") : secret "machine-api-operator-tls" not found Oct 11 10:28:36.574698 master-2 kubenswrapper[4776]: E1011 10:28:36.574566 4776 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Oct 11 10:28:36.574785 master-2 kubenswrapper[4776]: E1011 10:28:36.574755 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls podName:7e860f23-9dae-4606-9426-0edec38a332f nodeName:}" failed. No retries permitted until 2025-10-11 10:29:08.574710721 +0000 UTC m=+183.359137490 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-84f9cbd5d9-bjntd" (UID: "7e860f23-9dae-4606-9426-0edec38a332f") : secret "control-plane-machine-set-operator-tls" not found Oct 11 10:28:36.574904 master-2 kubenswrapper[4776]: E1011 10:28:36.574862 4776 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Oct 11 10:28:36.575099 master-2 kubenswrapper[4776]: E1011 10:28:36.574976 4776 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 11 10:28:36.575360 master-2 kubenswrapper[4776]: E1011 10:28:36.574989 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics podName:7652e0ca-2d18-48c7-80e0-f4a936038377 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:08.574954488 +0000 UTC m=+183.359381237 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics") pod "marketplace-operator-c4f798dd4-wsmdd" (UID: "7652e0ca-2d18-48c7-80e0-f4a936038377") : secret "marketplace-operator-metrics" not found Oct 11 10:28:36.575452 master-2 kubenswrapper[4776]: E1011 10:28:36.575399 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs podName:cbf33a7e-abea-411d-9a19-85cfe67debe3 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:08.575381021 +0000 UTC m=+183.359807740 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs") pod "multus-admission-controller-77b66fddc8-s5r5b" (UID: "cbf33a7e-abea-411d-9a19-85cfe67debe3") : secret "multus-admission-controller-secret" not found Oct 11 10:28:36.575452 master-2 kubenswrapper[4776]: E1011 10:28:36.575075 4776 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Oct 11 10:28:36.575539 master-2 kubenswrapper[4776]: E1011 10:28:36.575455 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls podName:dbaa6ca7-9865-42f6-8030-2decf702caa1 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:08.575444812 +0000 UTC m=+183.359871531 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-5b5dd85dcc-h8588" (UID: "dbaa6ca7-9865-42f6-8030-2decf702caa1") : secret "cluster-monitoring-operator-tls" not found Oct 11 10:28:36.575577 master-2 kubenswrapper[4776]: E1011 10:28:36.575547 4776 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Oct 11 10:28:36.575577 master-2 kubenswrapper[4776]: E1011 10:28:36.575576 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls podName:e4536c84-d8f3-4808-bf8b-9b40695f46de nodeName:}" failed. No retries permitted until 2025-10-11 10:29:08.575567716 +0000 UTC m=+183.359994435 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls") pod "machine-config-operator-7b75469658-jtmwh" (UID: "e4536c84-d8f3-4808-bf8b-9b40695f46de") : secret "mco-proxy-tls" not found Oct 11 10:28:36.577405 master-2 kubenswrapper[4776]: I1011 10:28:36.577362 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c-metrics-tls\") pod \"ingress-operator-766ddf4575-wf7mj\" (UID: \"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:36.577536 master-2 kubenswrapper[4776]: I1011 10:28:36.577502 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/08b7d4e3-1682-4a3b-a757-84ded3a16764-machine-approver-tls\") pod \"machine-approver-7876f99457-h7hhv\" (UID: \"08b7d4e3-1682-4a3b-a757-84ded3a16764\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:36.580329 master-2 kubenswrapper[4776]: I1011 10:28:36.580286 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:36.580736 master-2 kubenswrapper[4776]: I1011 10:28:36.580638 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/893af718-1fec-4b8b-8349-d85f978f4140-metrics-tls\") pod \"dns-operator-7769d9677-wh775\" (UID: \"893af718-1fec-4b8b-8349-d85f978f4140\") " pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:36.580942 master-2 kubenswrapper[4776]: I1011 10:28:36.580888 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b562963f-7112-411a-a64c-3b8eba909c59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-mwbsr\" (UID: \"b562963f-7112-411a-a64c-3b8eba909c59\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:36.584417 master-2 kubenswrapper[4776]: I1011 10:28:36.584374 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/b16a4f10-c724-43cf-acd4-b3f5aa575653-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-js8sj\" (UID: \"b16a4f10-c724-43cf-acd4-b3f5aa575653\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:36.590185 master-2 kubenswrapper[4776]: I1011 10:28:36.589255 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eba1e82e-9f3e-4273-836e-9407cc394b10-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-8d7xr\" (UID: \"eba1e82e-9f3e-4273-836e-9407cc394b10\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:36.625645 master-2 kubenswrapper[4776]: I1011 10:28:36.623246 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" Oct 11 10:28:36.652698 master-2 kubenswrapper[4776]: I1011 10:28:36.652590 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-jdkgd" event={"ID":"f6543c6f-6f31-431e-9327-60c8cfd70c7e","Type":"ContainerStarted","Data":"4f73e18df9c7f779acf2f55c8c41ee29b55b8adf693c1c5eb81eeb622e853772"} Oct 11 10:28:36.653381 master-2 kubenswrapper[4776]: I1011 10:28:36.653351 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:28:36.656220 master-2 kubenswrapper[4776]: I1011 10:28:36.655988 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" event={"ID":"08b7d4e3-1682-4a3b-a757-84ded3a16764","Type":"ContainerStarted","Data":"0add7d1bda760e1d5b101492c65e520fdafceed8322aea33238c621f47687a61"} Oct 11 10:28:36.711113 master-2 kubenswrapper[4776]: I1011 10:28:36.711062 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-7769d9677-wh775" Oct 11 10:28:36.746077 master-2 kubenswrapper[4776]: I1011 10:28:36.746018 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-jdkgd" podStartSLOduration=65.488807731 podStartE2EDuration="1m8.745999324s" podCreationTimestamp="2025-10-11 10:27:28 +0000 UTC" firstStartedPulling="2025-10-11 10:28:32.856730829 +0000 UTC m=+147.641157538" lastFinishedPulling="2025-10-11 10:28:36.113922422 +0000 UTC m=+150.898349131" observedRunningTime="2025-10-11 10:28:36.665545797 +0000 UTC m=+151.449972526" watchObservedRunningTime="2025-10-11 10:28:36.745999324 +0000 UTC m=+151.530426033" Oct 11 10:28:36.746324 master-2 kubenswrapper[4776]: I1011 10:28:36.746261 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb"] Oct 11 10:28:36.766569 master-2 kubenswrapper[4776]: W1011 10:28:36.758332 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b8dc5b8_3c48_4dba_9992_6e269ca133f1.slice/crio-188f1621be47ba092888f41a29f5d0a5260959698afa47b0b74f94fc571421c1 WatchSource:0}: Error finding container 188f1621be47ba092888f41a29f5d0a5260959698afa47b0b74f94fc571421c1: Status 404 returned error can't find the container with id 188f1621be47ba092888f41a29f5d0a5260959698afa47b0b74f94fc571421c1 Oct 11 10:28:36.792178 master-2 kubenswrapper[4776]: I1011 10:28:36.792115 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" Oct 11 10:28:36.808972 master-2 kubenswrapper[4776]: I1011 10:28:36.808513 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" Oct 11 10:28:36.823956 master-2 kubenswrapper[4776]: I1011 10:28:36.822404 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" Oct 11 10:28:36.883660 master-2 kubenswrapper[4776]: I1011 10:28:36.883609 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" Oct 11 10:28:36.895336 master-2 kubenswrapper[4776]: I1011 10:28:36.895295 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-7769d9677-wh775"] Oct 11 10:28:37.006089 master-2 kubenswrapper[4776]: I1011 10:28:37.006037 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj"] Oct 11 10:28:37.019538 master-2 kubenswrapper[4776]: W1011 10:28:37.019497 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb16a4f10_c724_43cf_acd4_b3f5aa575653.slice/crio-5710afd5394e3b53c5b13f8833cf4b5c9cc77c1ac6fd041e5f45c14fa911d31d WatchSource:0}: Error finding container 5710afd5394e3b53c5b13f8833cf4b5c9cc77c1ac6fd041e5f45c14fa911d31d: Status 404 returned error can't find the container with id 5710afd5394e3b53c5b13f8833cf4b5c9cc77c1ac6fd041e5f45c14fa911d31d Oct 11 10:28:37.049030 master-2 kubenswrapper[4776]: I1011 10:28:37.048974 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr"] Oct 11 10:28:37.054882 master-2 kubenswrapper[4776]: W1011 10:28:37.054835 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb562963f_7112_411a_a64c_3b8eba909c59.slice/crio-82e86843429c6909f09d49890b4e92330d30aff06c836b9fc7a117266e887235 WatchSource:0}: Error finding container 82e86843429c6909f09d49890b4e92330d30aff06c836b9fc7a117266e887235: Status 404 returned error can't find the container with id 82e86843429c6909f09d49890b4e92330d30aff06c836b9fc7a117266e887235 Oct 11 10:28:37.061925 master-2 kubenswrapper[4776]: I1011 10:28:37.061810 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj"] Oct 11 10:28:37.070024 master-2 kubenswrapper[4776]: W1011 10:28:37.069963 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ebe6a0e_5a45_4c92_bbb5_77f3ec1fe55c.slice/crio-4d04bee2a3fa895107e6ee4c7e0c78bb925b7271658dc663ec4f5779ff21a55b WatchSource:0}: Error finding container 4d04bee2a3fa895107e6ee4c7e0c78bb925b7271658dc663ec4f5779ff21a55b: Status 404 returned error can't find the container with id 4d04bee2a3fa895107e6ee4c7e0c78bb925b7271658dc663ec4f5779ff21a55b Oct 11 10:28:37.085488 master-2 kubenswrapper[4776]: I1011 10:28:37.085408 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr"] Oct 11 10:28:37.093644 master-2 kubenswrapper[4776]: W1011 10:28:37.093592 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeba1e82e_9f3e_4273_836e_9407cc394b10.slice/crio-288eff82e0422001024cfeac653461a2840d9f7875d884b876cdd25f039cb29c WatchSource:0}: Error finding container 288eff82e0422001024cfeac653461a2840d9f7875d884b876cdd25f039cb29c: Status 404 returned error can't find the container with id 288eff82e0422001024cfeac653461a2840d9f7875d884b876cdd25f039cb29c Oct 11 10:28:37.603761 master-2 kubenswrapper[4776]: I1011 10:28:37.603265 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-555f658fd6-wmcqt"] Oct 11 10:28:37.604394 
master-2 kubenswrapper[4776]: I1011 10:28:37.604358 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.607385 master-2 kubenswrapper[4776]: I1011 10:28:37.607257 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 11 10:28:37.608134 master-2 kubenswrapper[4776]: I1011 10:28:37.608079 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 11 10:28:37.608417 master-2 kubenswrapper[4776]: I1011 10:28:37.608386 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 11 10:28:37.608761 master-2 kubenswrapper[4776]: I1011 10:28:37.608626 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 11 10:28:37.609097 master-2 kubenswrapper[4776]: I1011 10:28:37.609031 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 11 10:28:37.610614 master-2 kubenswrapper[4776]: I1011 10:28:37.610401 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 11 10:28:37.610614 master-2 kubenswrapper[4776]: I1011 10:28:37.610407 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 11 10:28:37.610614 master-2 kubenswrapper[4776]: I1011 10:28:37.610486 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 11 10:28:37.612217 master-2 kubenswrapper[4776]: I1011 10:28:37.610884 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 11 10:28:37.615531 master-2 kubenswrapper[4776]: I1011 10:28:37.615391 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-555f658fd6-wmcqt"] Oct 11 10:28:37.617781 master-2 kubenswrapper[4776]: I1011 10:28:37.617693 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 11 10:28:37.661900 master-2 kubenswrapper[4776]: I1011 10:28:37.661840 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" event={"ID":"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c","Type":"ContainerStarted","Data":"4d04bee2a3fa895107e6ee4c7e0c78bb925b7271658dc663ec4f5779ff21a55b"} Oct 11 10:28:37.663510 master-2 kubenswrapper[4776]: I1011 10:28:37.663463 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" event={"ID":"6b8dc5b8-3c48-4dba-9992-6e269ca133f1","Type":"ContainerStarted","Data":"a2f213e229cd515098c17350a5db040adcc59dc05e9b25b48ab6c73159f7a768"} Oct 11 10:28:37.663587 master-2 kubenswrapper[4776]: I1011 10:28:37.663521 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" event={"ID":"6b8dc5b8-3c48-4dba-9992-6e269ca133f1","Type":"ContainerStarted","Data":"188f1621be47ba092888f41a29f5d0a5260959698afa47b0b74f94fc571421c1"} Oct 11 10:28:37.664787 master-2 kubenswrapper[4776]: I1011 10:28:37.664751 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" 
event={"ID":"08b7d4e3-1682-4a3b-a757-84ded3a16764","Type":"ContainerStarted","Data":"267425053e21eaecb5876aa58130583543e28c9e0ceacc764ad483ef9c1a09d8"} Oct 11 10:28:37.666023 master-2 kubenswrapper[4776]: I1011 10:28:37.665974 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" event={"ID":"b16a4f10-c724-43cf-acd4-b3f5aa575653","Type":"ContainerStarted","Data":"5710afd5394e3b53c5b13f8833cf4b5c9cc77c1ac6fd041e5f45c14fa911d31d"} Oct 11 10:28:37.666958 master-2 kubenswrapper[4776]: I1011 10:28:37.666918 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-7769d9677-wh775" event={"ID":"893af718-1fec-4b8b-8349-d85f978f4140","Type":"ContainerStarted","Data":"3ca9a32abe3eeaa78ad3b955ed2a9db43a464c56268719e20d096ebb23a8bc9c"} Oct 11 10:28:37.668405 master-2 kubenswrapper[4776]: I1011 10:28:37.668375 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" event={"ID":"eba1e82e-9f3e-4273-836e-9407cc394b10","Type":"ContainerStarted","Data":"8b3ae054e2080d8747bb5d4193692c2c66a2b67a445b4b6f41b27d918beea8e3"} Oct 11 10:28:37.668489 master-2 kubenswrapper[4776]: I1011 10:28:37.668411 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" event={"ID":"eba1e82e-9f3e-4273-836e-9407cc394b10","Type":"ContainerStarted","Data":"288eff82e0422001024cfeac653461a2840d9f7875d884b876cdd25f039cb29c"} Oct 11 10:28:37.669635 master-2 kubenswrapper[4776]: I1011 10:28:37.669606 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" event={"ID":"b562963f-7112-411a-a64c-3b8eba909c59","Type":"ContainerStarted","Data":"82e86843429c6909f09d49890b4e92330d30aff06c836b9fc7a117266e887235"} Oct 11 10:28:37.701371 master-2 kubenswrapper[4776]: I1011 10:28:37.701319 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-config\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.701568 master-2 kubenswrapper[4776]: I1011 10:28:37.701379 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc5gq\" (UniqueName: \"kubernetes.io/projected/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-kube-api-access-wc5gq\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.701568 master-2 kubenswrapper[4776]: I1011 10:28:37.701445 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-client\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.701568 master-2 kubenswrapper[4776]: I1011 10:28:37.701488 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-trusted-ca-bundle\") pod 
\"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.701568 master-2 kubenswrapper[4776]: I1011 10:28:37.701513 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.701742 master-2 kubenswrapper[4776]: I1011 10:28:37.701626 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-image-import-ca\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.701742 master-2 kubenswrapper[4776]: I1011 10:28:37.701656 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-node-pullsecrets\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.701742 master-2 kubenswrapper[4776]: I1011 10:28:37.701721 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-serving-cert\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.701742 master-2 kubenswrapper[4776]: I1011 10:28:37.701739 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit-dir\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.701851 master-2 kubenswrapper[4776]: I1011 10:28:37.701761 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-serving-ca\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.701851 master-2 kubenswrapper[4776]: I1011 10:28:37.701781 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-encryption-config\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802539 master-2 kubenswrapper[4776]: I1011 10:28:37.802454 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-node-pullsecrets\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802742 master-2 
kubenswrapper[4776]: I1011 10:28:37.802552 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-serving-cert\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802742 master-2 kubenswrapper[4776]: I1011 10:28:37.802575 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit-dir\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802742 master-2 kubenswrapper[4776]: I1011 10:28:37.802598 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-serving-ca\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802742 master-2 kubenswrapper[4776]: I1011 10:28:37.802617 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-encryption-config\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802742 master-2 kubenswrapper[4776]: I1011 10:28:37.802663 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-config\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802742 master-2 kubenswrapper[4776]: I1011 10:28:37.802741 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc5gq\" (UniqueName: \"kubernetes.io/projected/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-kube-api-access-wc5gq\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802984 master-2 kubenswrapper[4776]: I1011 10:28:37.802795 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-client\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802984 master-2 kubenswrapper[4776]: I1011 10:28:37.802791 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-node-pullsecrets\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802984 master-2 kubenswrapper[4776]: I1011 10:28:37.802848 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-trusted-ca-bundle\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") 
" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802984 master-2 kubenswrapper[4776]: I1011 10:28:37.802872 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.802984 master-2 kubenswrapper[4776]: I1011 10:28:37.802937 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-image-import-ca\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.804372 master-2 kubenswrapper[4776]: I1011 10:28:37.804348 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-image-import-ca\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.804440 master-2 kubenswrapper[4776]: I1011 10:28:37.804421 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-config\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.806516 master-2 kubenswrapper[4776]: I1011 10:28:37.806437 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-trusted-ca-bundle\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.807917 master-2 kubenswrapper[4776]: I1011 10:28:37.807752 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.808844 master-2 kubenswrapper[4776]: I1011 10:28:37.808275 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-serving-ca\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.808844 master-2 kubenswrapper[4776]: I1011 10:28:37.808329 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit-dir\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.811101 master-2 kubenswrapper[4776]: I1011 10:28:37.810961 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-client\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " 
pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.812053 master-2 kubenswrapper[4776]: I1011 10:28:37.812017 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-serving-cert\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.824440 master-2 kubenswrapper[4776]: I1011 10:28:37.824402 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-encryption-config\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.832548 master-2 kubenswrapper[4776]: I1011 10:28:37.832508 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc5gq\" (UniqueName: \"kubernetes.io/projected/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-kube-api-access-wc5gq\") pod \"apiserver-555f658fd6-wmcqt\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:37.924844 master-2 kubenswrapper[4776]: I1011 10:28:37.924025 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:38.133213 master-2 kubenswrapper[4776]: I1011 10:28:38.133172 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-555f658fd6-wmcqt"] Oct 11 10:28:38.408608 master-2 kubenswrapper[4776]: I1011 10:28:38.408186 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:38.408796 master-2 kubenswrapper[4776]: I1011 10:28:38.408666 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:38.408796 master-2 kubenswrapper[4776]: E1011 10:28:38.408407 4776 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:38.408796 master-2 kubenswrapper[4776]: E1011 10:28:38.408788 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:42.408766233 +0000 UTC m=+157.193192942 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : secret "serving-cert" not found Oct 11 10:28:38.408904 master-2 kubenswrapper[4776]: E1011 10:28:38.408834 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:38.408904 master-2 kubenswrapper[4776]: E1011 10:28:38.408897 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:42.408882187 +0000 UTC m=+157.193308896 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : configmap "client-ca" not found Oct 11 10:28:38.675602 master-2 kubenswrapper[4776]: I1011 10:28:38.675494 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" event={"ID":"7a89cb41-fb97-4282-a12d-c6f8d87bc41e","Type":"ContainerStarted","Data":"632a135875099c1d39a46b5212f4753eda648d4f1ce35df8cc0f167cab38ce86"} Oct 11 10:28:39.524654 master-2 kubenswrapper[4776]: I1011 10:28:39.524429 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:39.524654 master-2 kubenswrapper[4776]: E1011 10:28:39.524595 4776 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:39.524654 master-2 kubenswrapper[4776]: I1011 10:28:39.524638 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:39.524654 master-2 kubenswrapper[4776]: E1011 10:28:39.524661 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:47.524641056 +0000 UTC m=+162.309067765 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : secret "serving-cert" not found Oct 11 10:28:39.525531 master-2 kubenswrapper[4776]: E1011 10:28:39.524863 4776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:39.525531 master-2 kubenswrapper[4776]: E1011 10:28:39.524985 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:47.524953775 +0000 UTC m=+162.309380554 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : configmap "client-ca" not found Oct 11 10:28:42.462199 master-2 kubenswrapper[4776]: I1011 10:28:42.462120 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:42.462852 master-2 kubenswrapper[4776]: I1011 10:28:42.462311 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:42.462852 master-2 kubenswrapper[4776]: E1011 10:28:42.462316 4776 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 11 10:28:42.462852 master-2 kubenswrapper[4776]: E1011 10:28:42.462506 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:50.462487602 +0000 UTC m=+165.246914311 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : secret "serving-cert" not found Oct 11 10:28:42.462852 master-2 kubenswrapper[4776]: E1011 10:28:42.462358 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:42.462852 master-2 kubenswrapper[4776]: E1011 10:28:42.462657 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:50.462621906 +0000 UTC m=+165.247048605 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : configmap "client-ca" not found Oct 11 10:28:43.436158 master-2 kubenswrapper[4776]: I1011 10:28:43.436103 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6"] Oct 11 10:28:43.436728 master-2 kubenswrapper[4776]: I1011 10:28:43.436712 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.439724 master-2 kubenswrapper[4776]: I1011 10:28:43.439664 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 11 10:28:43.440520 master-2 kubenswrapper[4776]: I1011 10:28:43.440492 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 11 10:28:43.440844 master-2 kubenswrapper[4776]: I1011 10:28:43.440820 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 11 10:28:43.441006 master-2 kubenswrapper[4776]: I1011 10:28:43.440982 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 11 10:28:43.441631 master-2 kubenswrapper[4776]: I1011 10:28:43.441583 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 11 10:28:43.441942 master-2 kubenswrapper[4776]: I1011 10:28:43.441889 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 11 10:28:43.441996 master-2 kubenswrapper[4776]: I1011 10:28:43.441945 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 11 10:28:43.442357 master-2 kubenswrapper[4776]: I1011 10:28:43.442337 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 11 10:28:43.461206 master-2 kubenswrapper[4776]: I1011 10:28:43.461179 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6"] Oct 11 10:28:43.577493 master-2 kubenswrapper[4776]: I1011 10:28:43.577442 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-etcd-serving-ca\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.577888 master-2 kubenswrapper[4776]: I1011 10:28:43.577508 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-trusted-ca-bundle\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.577888 master-2 kubenswrapper[4776]: I1011 10:28:43.577533 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert\") pod 
\"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.577888 master-2 kubenswrapper[4776]: I1011 10:28:43.577862 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e350b624-6581-4982-96f3-cd5c37256e02-audit-dir\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.578083 master-2 kubenswrapper[4776]: I1011 10:28:43.578045 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqlgq\" (UniqueName: \"kubernetes.io/projected/e350b624-6581-4982-96f3-cd5c37256e02-kube-api-access-tqlgq\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.578134 master-2 kubenswrapper[4776]: I1011 10:28:43.578110 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-etcd-client\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.578203 master-2 kubenswrapper[4776]: I1011 10:28:43.578176 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-audit-policies\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.578242 master-2 kubenswrapper[4776]: I1011 10:28:43.578211 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-encryption-config\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.679661 master-2 kubenswrapper[4776]: I1011 10:28:43.679590 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e350b624-6581-4982-96f3-cd5c37256e02-audit-dir\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.679872 master-2 kubenswrapper[4776]: I1011 10:28:43.679703 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqlgq\" (UniqueName: \"kubernetes.io/projected/e350b624-6581-4982-96f3-cd5c37256e02-kube-api-access-tqlgq\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.679872 master-2 kubenswrapper[4776]: I1011 10:28:43.679730 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-etcd-client\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.679872 
master-2 kubenswrapper[4776]: I1011 10:28:43.679763 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-audit-policies\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.679872 master-2 kubenswrapper[4776]: I1011 10:28:43.679786 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-encryption-config\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.679872 master-2 kubenswrapper[4776]: I1011 10:28:43.679848 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-etcd-serving-ca\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.680103 master-2 kubenswrapper[4776]: I1011 10:28:43.679889 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-trusted-ca-bundle\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.680103 master-2 kubenswrapper[4776]: I1011 10:28:43.679942 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.680160 master-2 kubenswrapper[4776]: E1011 10:28:43.680139 4776 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found Oct 11 10:28:43.680207 master-2 kubenswrapper[4776]: E1011 10:28:43.680190 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert podName:e350b624-6581-4982-96f3-cd5c37256e02 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:44.180175826 +0000 UTC m=+158.964602535 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert") pod "apiserver-65b6f4d4c9-5wrz6" (UID: "e350b624-6581-4982-96f3-cd5c37256e02") : secret "serving-cert" not found Oct 11 10:28:43.681255 master-2 kubenswrapper[4776]: I1011 10:28:43.680931 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-etcd-serving-ca\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.681255 master-2 kubenswrapper[4776]: I1011 10:28:43.681066 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-trusted-ca-bundle\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.681799 master-2 kubenswrapper[4776]: I1011 10:28:43.681750 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-audit-policies\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.681977 master-2 kubenswrapper[4776]: I1011 10:28:43.681930 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e350b624-6581-4982-96f3-cd5c37256e02-audit-dir\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.684551 master-2 kubenswrapper[4776]: I1011 10:28:43.684510 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-etcd-client\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.684614 master-2 kubenswrapper[4776]: I1011 10:28:43.684565 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-encryption-config\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:43.696423 master-2 kubenswrapper[4776]: I1011 10:28:43.696355 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqlgq\" (UniqueName: \"kubernetes.io/projected/e350b624-6581-4982-96f3-cd5c37256e02-kube-api-access-tqlgq\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:44.185349 master-2 kubenswrapper[4776]: I1011 10:28:44.185286 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:44.185731 master-2 
kubenswrapper[4776]: E1011 10:28:44.185425 4776 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found Oct 11 10:28:44.185731 master-2 kubenswrapper[4776]: E1011 10:28:44.185495 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert podName:e350b624-6581-4982-96f3-cd5c37256e02 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:45.185477466 +0000 UTC m=+159.969904175 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert") pod "apiserver-65b6f4d4c9-5wrz6" (UID: "e350b624-6581-4982-96f3-cd5c37256e02") : secret "serving-cert" not found Oct 11 10:28:45.195584 master-2 kubenswrapper[4776]: I1011 10:28:45.195521 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:45.196223 master-2 kubenswrapper[4776]: E1011 10:28:45.195826 4776 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found Oct 11 10:28:45.196223 master-2 kubenswrapper[4776]: E1011 10:28:45.195884 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert podName:e350b624-6581-4982-96f3-cd5c37256e02 nodeName:}" failed. No retries permitted until 2025-10-11 10:28:47.195865861 +0000 UTC m=+161.980292570 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert") pod "apiserver-65b6f4d4c9-5wrz6" (UID: "e350b624-6581-4982-96f3-cd5c37256e02") : secret "serving-cert" not found Oct 11 10:28:46.706418 master-2 kubenswrapper[4776]: I1011 10:28:46.706093 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" event={"ID":"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c","Type":"ContainerStarted","Data":"2eff4353493e1e27d8a85bd8e32e0408e179cf5370df38de2ded9a10d5e6c314"} Oct 11 10:28:47.224797 master-2 kubenswrapper[4776]: I1011 10:28:47.224734 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:47.228977 master-2 kubenswrapper[4776]: I1011 10:28:47.228943 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert\") pod \"apiserver-65b6f4d4c9-5wrz6\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:47.365326 master-2 kubenswrapper[4776]: I1011 10:28:47.365256 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:47.529329 master-2 kubenswrapper[4776]: I1011 10:28:47.529167 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:47.529550 master-2 kubenswrapper[4776]: I1011 10:28:47.529329 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:47.529550 master-2 kubenswrapper[4776]: E1011 10:28:47.529445 4776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:47.529550 master-2 kubenswrapper[4776]: E1011 10:28:47.529518 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:03.529498059 +0000 UTC m=+178.313924778 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : configmap "client-ca" not found Oct 11 10:28:47.537365 master-2 kubenswrapper[4776]: I1011 10:28:47.537317 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:28:48.472711 master-2 kubenswrapper[4776]: I1011 10:28:48.472398 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-sgvjd"] Oct 11 10:28:48.473872 master-2 kubenswrapper[4776]: I1011 10:28:48.473243 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-sgvjd" Oct 11 10:28:48.476354 master-2 kubenswrapper[4776]: I1011 10:28:48.476310 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Oct 11 10:28:48.476505 master-2 kubenswrapper[4776]: I1011 10:28:48.476398 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Oct 11 10:28:48.476717 master-2 kubenswrapper[4776]: I1011 10:28:48.476662 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Oct 11 10:28:48.477062 master-2 kubenswrapper[4776]: I1011 10:28:48.477028 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Oct 11 10:28:48.483731 master-2 kubenswrapper[4776]: I1011 10:28:48.483701 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sgvjd"] Oct 11 10:28:48.543398 master-2 kubenswrapper[4776]: I1011 10:28:48.543334 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3f3ba3c-1d27-4529-9ae3-a61f88e50b62-config-volume\") pod \"dns-default-sgvjd\" (UID: \"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62\") " pod="openshift-dns/dns-default-sgvjd" Oct 11 10:28:48.543398 master-2 kubenswrapper[4776]: I1011 10:28:48.543378 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e3f3ba3c-1d27-4529-9ae3-a61f88e50b62-metrics-tls\") pod \"dns-default-sgvjd\" (UID: \"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62\") " pod="openshift-dns/dns-default-sgvjd" Oct 11 10:28:48.543619 master-2 kubenswrapper[4776]: I1011 10:28:48.543411 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksw9c\" (UniqueName: \"kubernetes.io/projected/e3f3ba3c-1d27-4529-9ae3-a61f88e50b62-kube-api-access-ksw9c\") pod \"dns-default-sgvjd\" (UID: \"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62\") " pod="openshift-dns/dns-default-sgvjd" Oct 11 10:28:48.644417 master-2 kubenswrapper[4776]: I1011 10:28:48.644344 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3f3ba3c-1d27-4529-9ae3-a61f88e50b62-config-volume\") pod \"dns-default-sgvjd\" (UID: \"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62\") " pod="openshift-dns/dns-default-sgvjd" Oct 11 10:28:48.644417 master-2 kubenswrapper[4776]: I1011 10:28:48.644401 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e3f3ba3c-1d27-4529-9ae3-a61f88e50b62-metrics-tls\") pod \"dns-default-sgvjd\" (UID: \"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62\") " pod="openshift-dns/dns-default-sgvjd" Oct 11 10:28:48.644644 master-2 kubenswrapper[4776]: I1011 10:28:48.644443 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksw9c\" (UniqueName: \"kubernetes.io/projected/e3f3ba3c-1d27-4529-9ae3-a61f88e50b62-kube-api-access-ksw9c\") pod \"dns-default-sgvjd\" (UID: \"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62\") " pod="openshift-dns/dns-default-sgvjd" Oct 11 10:28:48.645292 master-2 kubenswrapper[4776]: I1011 10:28:48.645240 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/e3f3ba3c-1d27-4529-9ae3-a61f88e50b62-config-volume\") pod \"dns-default-sgvjd\" (UID: \"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62\") " pod="openshift-dns/dns-default-sgvjd" Oct 11 10:28:48.650262 master-2 kubenswrapper[4776]: I1011 10:28:48.650217 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e3f3ba3c-1d27-4529-9ae3-a61f88e50b62-metrics-tls\") pod \"dns-default-sgvjd\" (UID: \"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62\") " pod="openshift-dns/dns-default-sgvjd" Oct 11 10:28:48.681280 master-2 kubenswrapper[4776]: I1011 10:28:48.681191 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksw9c\" (UniqueName: \"kubernetes.io/projected/e3f3ba3c-1d27-4529-9ae3-a61f88e50b62-kube-api-access-ksw9c\") pod \"dns-default-sgvjd\" (UID: \"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62\") " pod="openshift-dns/dns-default-sgvjd" Oct 11 10:28:48.714606 master-2 kubenswrapper[4776]: I1011 10:28:48.714284 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-7769d9677-wh775" event={"ID":"893af718-1fec-4b8b-8349-d85f978f4140","Type":"ContainerStarted","Data":"950901b87af0c91716dd6b0b32b00414910693e5066f298cd5ccb27d712bc959"} Oct 11 10:28:48.788988 master-2 kubenswrapper[4776]: I1011 10:28:48.788864 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-z9trl"] Oct 11 10:28:48.789188 master-2 kubenswrapper[4776]: I1011 10:28:48.789140 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-sgvjd" Oct 11 10:28:48.789486 master-2 kubenswrapper[4776]: I1011 10:28:48.789455 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-z9trl" Oct 11 10:28:48.848387 master-2 kubenswrapper[4776]: I1011 10:28:48.848266 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxvgh\" (UniqueName: \"kubernetes.io/projected/0550ab10-d45d-4526-8551-c1ce0b232bbc-kube-api-access-bxvgh\") pod \"node-resolver-z9trl\" (UID: \"0550ab10-d45d-4526-8551-c1ce0b232bbc\") " pod="openshift-dns/node-resolver-z9trl" Oct 11 10:28:48.848387 master-2 kubenswrapper[4776]: I1011 10:28:48.848361 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0550ab10-d45d-4526-8551-c1ce0b232bbc-hosts-file\") pod \"node-resolver-z9trl\" (UID: \"0550ab10-d45d-4526-8551-c1ce0b232bbc\") " pod="openshift-dns/node-resolver-z9trl" Oct 11 10:28:48.950538 master-2 kubenswrapper[4776]: I1011 10:28:48.950423 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxvgh\" (UniqueName: \"kubernetes.io/projected/0550ab10-d45d-4526-8551-c1ce0b232bbc-kube-api-access-bxvgh\") pod \"node-resolver-z9trl\" (UID: \"0550ab10-d45d-4526-8551-c1ce0b232bbc\") " pod="openshift-dns/node-resolver-z9trl" Oct 11 10:28:48.950538 master-2 kubenswrapper[4776]: I1011 10:28:48.950524 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0550ab10-d45d-4526-8551-c1ce0b232bbc-hosts-file\") pod \"node-resolver-z9trl\" (UID: \"0550ab10-d45d-4526-8551-c1ce0b232bbc\") " pod="openshift-dns/node-resolver-z9trl" Oct 11 10:28:48.950978 master-2 kubenswrapper[4776]: I1011 10:28:48.950923 4776 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0550ab10-d45d-4526-8551-c1ce0b232bbc-hosts-file\") pod \"node-resolver-z9trl\" (UID: \"0550ab10-d45d-4526-8551-c1ce0b232bbc\") " pod="openshift-dns/node-resolver-z9trl" Oct 11 10:28:48.982072 master-2 kubenswrapper[4776]: I1011 10:28:48.982009 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxvgh\" (UniqueName: \"kubernetes.io/projected/0550ab10-d45d-4526-8551-c1ce0b232bbc-kube-api-access-bxvgh\") pod \"node-resolver-z9trl\" (UID: \"0550ab10-d45d-4526-8551-c1ce0b232bbc\") " pod="openshift-dns/node-resolver-z9trl" Oct 11 10:28:49.123624 master-2 kubenswrapper[4776]: I1011 10:28:49.123449 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-z9trl" Oct 11 10:28:49.626022 master-2 kubenswrapper[4776]: E1011 10:28:49.625768 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[serving-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" podUID="b7b07707-84bd-43a6-a43d-6680decaa210" Oct 11 10:28:49.698324 master-2 kubenswrapper[4776]: I1011 10:28:49.697990 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6"] Oct 11 10:28:49.713830 master-2 kubenswrapper[4776]: I1011 10:28:49.713760 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sgvjd"] Oct 11 10:28:49.714552 master-2 kubenswrapper[4776]: W1011 10:28:49.714342 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode350b624_6581_4982_96f3_cd5c37256e02.slice/crio-7123427ff4a739a7b15f9487edb2172df73189bcf0b6f9273cbe9a3faa4de58f WatchSource:0}: Error finding container 7123427ff4a739a7b15f9487edb2172df73189bcf0b6f9273cbe9a3faa4de58f: Status 404 returned error can't find the container with id 7123427ff4a739a7b15f9487edb2172df73189bcf0b6f9273cbe9a3faa4de58f Oct 11 10:28:49.737666 master-2 kubenswrapper[4776]: I1011 10:28:49.737621 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" event={"ID":"b16a4f10-c724-43cf-acd4-b3f5aa575653","Type":"ContainerStarted","Data":"c0e8f71d396fd27257db760a12b957d9766d7b8f4ea38505f65cfa745ea983cb"} Oct 11 10:28:49.741546 master-2 kubenswrapper[4776]: W1011 10:28:49.741485 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3f3ba3c_1d27_4529_9ae3_a61f88e50b62.slice/crio-4fbd5baa22bf7168e39d0a10835a0030b5465676760f7471ce718b48974a41f5 WatchSource:0}: Error finding container 4fbd5baa22bf7168e39d0a10835a0030b5465676760f7471ce718b48974a41f5: Status 404 returned error can't find the container with id 4fbd5baa22bf7168e39d0a10835a0030b5465676760f7471ce718b48974a41f5 Oct 11 10:28:49.743765 master-2 kubenswrapper[4776]: I1011 10:28:49.743694 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" event={"ID":"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c","Type":"ContainerStarted","Data":"56e5041a6c1005b559504440c80e17a9a6cffc931c2704b5c5ae753ba7406a36"} Oct 11 10:28:49.746265 master-2 kubenswrapper[4776]: I1011 10:28:49.746217 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/node-resolver-z9trl" event={"ID":"0550ab10-d45d-4526-8551-c1ce0b232bbc","Type":"ContainerStarted","Data":"8e10b519a711a0feb5153b454a4fb4c5f5dd8d87baaef013a2af6d31f287dbc6"} Oct 11 10:28:49.747881 master-2 kubenswrapper[4776]: I1011 10:28:49.747841 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" event={"ID":"6b8dc5b8-3c48-4dba-9992-6e269ca133f1","Type":"ContainerStarted","Data":"18e94b730f9322c1d1497f21d03aa5ce221afb64bd7545bbd8eb547d8ca9d1f9"} Oct 11 10:28:49.753846 master-2 kubenswrapper[4776]: I1011 10:28:49.753782 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj" podStartSLOduration=131.953448169 podStartE2EDuration="2m22.753765919s" podCreationTimestamp="2025-10-11 10:26:27 +0000 UTC" firstStartedPulling="2025-10-11 10:28:37.021004422 +0000 UTC m=+151.805431131" lastFinishedPulling="2025-10-11 10:28:47.821322132 +0000 UTC m=+162.605748881" observedRunningTime="2025-10-11 10:28:49.752519564 +0000 UTC m=+164.536946273" watchObservedRunningTime="2025-10-11 10:28:49.753765919 +0000 UTC m=+164.538192628" Oct 11 10:28:49.787234 master-2 kubenswrapper[4776]: I1011 10:28:49.787164 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb" podStartSLOduration=112.381565983 podStartE2EDuration="1m57.78714373s" podCreationTimestamp="2025-10-11 10:26:52 +0000 UTC" firstStartedPulling="2025-10-11 10:28:36.870666813 +0000 UTC m=+151.655093522" lastFinishedPulling="2025-10-11 10:28:42.27624455 +0000 UTC m=+157.060671269" observedRunningTime="2025-10-11 10:28:49.771365046 +0000 UTC m=+164.555791755" watchObservedRunningTime="2025-10-11 10:28:49.78714373 +0000 UTC m=+164.571570439" Oct 11 10:28:49.789706 master-2 kubenswrapper[4776]: I1011 10:28:49.789654 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" podStartSLOduration=135.586553187 podStartE2EDuration="2m20.789645942s" podCreationTimestamp="2025-10-11 10:26:29 +0000 UTC" firstStartedPulling="2025-10-11 10:28:37.073109613 +0000 UTC m=+151.857536322" lastFinishedPulling="2025-10-11 10:28:42.276202368 +0000 UTC m=+157.060629077" observedRunningTime="2025-10-11 10:28:49.786491672 +0000 UTC m=+164.570918381" watchObservedRunningTime="2025-10-11 10:28:49.789645942 +0000 UTC m=+164.574072651" Oct 11 10:28:49.960759 master-2 kubenswrapper[4776]: I1011 10:28:49.960137 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-5tqrt"] Oct 11 10:28:49.961477 master-2 kubenswrapper[4776]: I1011 10:28:49.961439 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.074756 master-2 kubenswrapper[4776]: I1011 10:28:50.074711 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-sys\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.074756 master-2 kubenswrapper[4776]: I1011 10:28:50.074764 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-systemd\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075078 master-2 kubenswrapper[4776]: I1011 10:28:50.074826 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-host\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075078 master-2 kubenswrapper[4776]: I1011 10:28:50.074878 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-lib-modules\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075078 master-2 kubenswrapper[4776]: I1011 10:28:50.074911 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-var-lib-kubelet\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075078 master-2 kubenswrapper[4776]: I1011 10:28:50.074928 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-sysctl-conf\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075078 master-2 kubenswrapper[4776]: I1011 10:28:50.074943 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzpwk\" (UniqueName: \"kubernetes.io/projected/4347a983-767e-44a3-92e8-74386c4e2e82-kube-api-access-bzpwk\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075078 master-2 kubenswrapper[4776]: I1011 10:28:50.074965 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-sysctl-d\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075078 master-2 kubenswrapper[4776]: I1011 10:28:50.074991 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/4347a983-767e-44a3-92e8-74386c4e2e82-tmp\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075489 master-2 kubenswrapper[4776]: I1011 10:28:50.075162 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-kubernetes\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075489 master-2 kubenswrapper[4776]: I1011 10:28:50.075273 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-run\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075489 master-2 kubenswrapper[4776]: I1011 10:28:50.075301 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-sysconfig\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075489 master-2 kubenswrapper[4776]: I1011 10:28:50.075318 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/4347a983-767e-44a3-92e8-74386c4e2e82-etc-tuned\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.075489 master-2 kubenswrapper[4776]: I1011 10:28:50.075391 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-modprobe-d\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.176398 master-2 kubenswrapper[4776]: I1011 10:28:50.176334 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4347a983-767e-44a3-92e8-74386c4e2e82-tmp\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.176398 master-2 kubenswrapper[4776]: I1011 10:28:50.176381 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-kubernetes\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.176638 master-2 kubenswrapper[4776]: I1011 10:28:50.176420 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-run\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.176638 master-2 kubenswrapper[4776]: I1011 10:28:50.176445 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-sysconfig\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.176638 master-2 kubenswrapper[4776]: I1011 10:28:50.176468 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/4347a983-767e-44a3-92e8-74386c4e2e82-etc-tuned\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.176638 master-2 kubenswrapper[4776]: I1011 10:28:50.176501 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-modprobe-d\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.176638 master-2 kubenswrapper[4776]: I1011 10:28:50.176543 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-sys\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.176638 master-2 kubenswrapper[4776]: I1011 10:28:50.176559 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-kubernetes\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.176638 master-2 kubenswrapper[4776]: I1011 10:28:50.176616 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-systemd\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.176638 master-2 kubenswrapper[4776]: I1011 10:28:50.176566 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-systemd\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.176886 master-2 kubenswrapper[4776]: I1011 10:28:50.176726 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-run\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.176886 master-2 kubenswrapper[4776]: I1011 10:28:50.176750 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-host\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.176886 master-2 kubenswrapper[4776]: I1011 10:28:50.176789 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-lib-modules\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.176886 master-2 kubenswrapper[4776]: I1011 10:28:50.176805 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-modprobe-d\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.176886 master-2 kubenswrapper[4776]: I1011 10:28:50.176852 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-sysconfig\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.177011 master-2 kubenswrapper[4776]: I1011 10:28:50.176899 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-var-lib-kubelet\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.177011 master-2 kubenswrapper[4776]: I1011 10:28:50.176906 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-sys\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.177011 master-2 kubenswrapper[4776]: I1011 10:28:50.176925 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-sysctl-conf\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.177011 master-2 kubenswrapper[4776]: I1011 10:28:50.176969 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-lib-modules\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.177011 master-2 kubenswrapper[4776]: I1011 10:28:50.176948 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-host\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.177011 master-2 kubenswrapper[4776]: I1011 10:28:50.177007 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-var-lib-kubelet\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.177160 master-2 kubenswrapper[4776]: I1011 10:28:50.177068 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzpwk\" (UniqueName: 
\"kubernetes.io/projected/4347a983-767e-44a3-92e8-74386c4e2e82-kube-api-access-bzpwk\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.177160 master-2 kubenswrapper[4776]: I1011 10:28:50.177092 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-sysctl-conf\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.177160 master-2 kubenswrapper[4776]: I1011 10:28:50.177114 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-sysctl-d\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.177234 master-2 kubenswrapper[4776]: I1011 10:28:50.177180 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/4347a983-767e-44a3-92e8-74386c4e2e82-etc-sysctl-d\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.180656 master-2 kubenswrapper[4776]: I1011 10:28:50.180621 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4347a983-767e-44a3-92e8-74386c4e2e82-tmp\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.186999 master-2 kubenswrapper[4776]: I1011 10:28:50.186950 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/4347a983-767e-44a3-92e8-74386c4e2e82-etc-tuned\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.198166 master-2 kubenswrapper[4776]: I1011 10:28:50.198128 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzpwk\" (UniqueName: \"kubernetes.io/projected/4347a983-767e-44a3-92e8-74386c4e2e82-kube-api-access-bzpwk\") pod \"tuned-5tqrt\" (UID: \"4347a983-767e-44a3-92e8-74386c4e2e82\") " pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.291232 master-2 kubenswrapper[4776]: I1011 10:28:50.291089 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" Oct 11 10:28:50.306649 master-2 kubenswrapper[4776]: W1011 10:28:50.306592 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4347a983_767e_44a3_92e8_74386c4e2e82.slice/crio-12e46036439fb4dbe17ce032eaf899c78e9c38fc32a80d6342dc2f61dd5bc37a WatchSource:0}: Error finding container 12e46036439fb4dbe17ce032eaf899c78e9c38fc32a80d6342dc2f61dd5bc37a: Status 404 returned error can't find the container with id 12e46036439fb4dbe17ce032eaf899c78e9c38fc32a80d6342dc2f61dd5bc37a Oct 11 10:28:50.481358 master-2 kubenswrapper[4776]: I1011 10:28:50.481297 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:50.481358 master-2 kubenswrapper[4776]: I1011 10:28:50.481367 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:50.481630 master-2 kubenswrapper[4776]: E1011 10:28:50.481472 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:28:50.481630 master-2 kubenswrapper[4776]: E1011 10:28:50.481530 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:06.481516535 +0000 UTC m=+181.265943244 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : configmap "client-ca" not found Oct 11 10:28:50.484994 master-2 kubenswrapper[4776]: I1011 10:28:50.484946 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:28:50.756210 master-2 kubenswrapper[4776]: I1011 10:28:50.754137 4776 generic.go:334] "Generic (PLEG): container finished" podID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerID="b832fb464d44d9bbecf0e8282e7db004cc8bdd8588f8fbb153766321b64a0e01" exitCode=0 Oct 11 10:28:50.756210 master-2 kubenswrapper[4776]: I1011 10:28:50.754266 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" event={"ID":"7a89cb41-fb97-4282-a12d-c6f8d87bc41e","Type":"ContainerDied","Data":"b832fb464d44d9bbecf0e8282e7db004cc8bdd8588f8fbb153766321b64a0e01"} Oct 11 10:28:50.760014 master-2 kubenswrapper[4776]: I1011 10:28:50.758271 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" event={"ID":"08b7d4e3-1682-4a3b-a757-84ded3a16764","Type":"ContainerStarted","Data":"441bff0c1dbecd89cdf0753230b353069931a1e8819510d825274248cb28dd04"} Oct 11 10:28:50.761304 master-2 kubenswrapper[4776]: I1011 10:28:50.761263 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" event={"ID":"eba1e82e-9f3e-4273-836e-9407cc394b10","Type":"ContainerStarted","Data":"9490ddff809a74a126a8a8c9116d6770a13d848ba3beedd121f2f46f7a6331ef"} Oct 11 10:28:50.762749 master-2 kubenswrapper[4776]: I1011 10:28:50.762703 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sgvjd" event={"ID":"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62","Type":"ContainerStarted","Data":"4fbd5baa22bf7168e39d0a10835a0030b5465676760f7471ce718b48974a41f5"} Oct 11 10:28:50.764094 master-2 kubenswrapper[4776]: I1011 10:28:50.764074 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" event={"ID":"b562963f-7112-411a-a64c-3b8eba909c59","Type":"ContainerStarted","Data":"1e1cfe199ccdaa68d15ec5334ceee0d8a37a2ac146702dfa36b53b722456e784"} Oct 11 10:28:50.768062 master-2 kubenswrapper[4776]: I1011 10:28:50.768031 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" event={"ID":"e350b624-6581-4982-96f3-cd5c37256e02","Type":"ContainerStarted","Data":"7123427ff4a739a7b15f9487edb2172df73189bcf0b6f9273cbe9a3faa4de58f"} Oct 11 10:28:50.771125 master-2 kubenswrapper[4776]: I1011 10:28:50.771095 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-7769d9677-wh775" event={"ID":"893af718-1fec-4b8b-8349-d85f978f4140","Type":"ContainerStarted","Data":"584cdc1a444916dd850d4de78dc9c815c71b05d47e34da7ccf47aad50644ba49"} Oct 11 10:28:50.773138 master-2 kubenswrapper[4776]: I1011 10:28:50.773103 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" event={"ID":"4347a983-767e-44a3-92e8-74386c4e2e82","Type":"ContainerStarted","Data":"ee015f8808258d90664cc71bae7bb11bb6d0962b8e9697f8b806d4475fbe89c5"} Oct 11 10:28:50.773138 master-2 kubenswrapper[4776]: I1011 10:28:50.773133 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" event={"ID":"4347a983-767e-44a3-92e8-74386c4e2e82","Type":"ContainerStarted","Data":"12e46036439fb4dbe17ce032eaf899c78e9c38fc32a80d6342dc2f61dd5bc37a"} Oct 11 10:28:50.775772 master-2 kubenswrapper[4776]: I1011 10:28:50.775743 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-z9trl" event={"ID":"0550ab10-d45d-4526-8551-c1ce0b232bbc","Type":"ContainerStarted","Data":"45a0d23a5a7c9e3b7dbcd399a6e078c4573d85741efbdb1c99131191da106db8"} Oct 11 10:28:50.798583 master-2 kubenswrapper[4776]: I1011 10:28:50.798522 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr" podStartSLOduration=129.035133116 podStartE2EDuration="2m19.798505063s" podCreationTimestamp="2025-10-11 10:26:31 +0000 UTC" firstStartedPulling="2025-10-11 10:28:37.057992957 +0000 UTC m=+151.842419666" lastFinishedPulling="2025-10-11 10:28:47.821364904 +0000 UTC m=+162.605791613" observedRunningTime="2025-10-11 10:28:50.798161363 +0000 UTC m=+165.582588072" watchObservedRunningTime="2025-10-11 10:28:50.798505063 +0000 UTC m=+165.582931772" Oct 11 10:28:50.815708 master-2 kubenswrapper[4776]: I1011 10:28:50.815470 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-z9trl" podStartSLOduration=2.815442391 podStartE2EDuration="2.815442391s" podCreationTimestamp="2025-10-11 10:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:28:50.812158646 +0000 UTC m=+165.596585355" watchObservedRunningTime="2025-10-11 10:28:50.815442391 +0000 UTC m=+165.599869100" Oct 11 10:28:50.829161 master-2 kubenswrapper[4776]: I1011 10:28:50.828856 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr" podStartSLOduration=108.551456274 podStartE2EDuration="2m0.828837497s" podCreationTimestamp="2025-10-11 10:26:50 +0000 UTC" firstStartedPulling="2025-10-11 10:28:37.236474537 +0000 UTC m=+152.020901286" lastFinishedPulling="2025-10-11 10:28:49.51385576 +0000 UTC m=+164.298282509" observedRunningTime="2025-10-11 10:28:50.828196038 +0000 UTC m=+165.612622747" watchObservedRunningTime="2025-10-11 10:28:50.828837497 +0000 UTC m=+165.613264206" Oct 11 10:28:50.897130 master-2 kubenswrapper[4776]: I1011 10:28:50.896735 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv" podStartSLOduration=94.485196283 podStartE2EDuration="1m39.896718931s" podCreationTimestamp="2025-10-11 10:27:11 +0000 UTC" firstStartedPulling="2025-10-11 10:28:36.870509918 +0000 UTC m=+151.654936627" lastFinishedPulling="2025-10-11 10:28:42.282032556 +0000 UTC m=+157.066459275" observedRunningTime="2025-10-11 10:28:50.896293189 +0000 UTC m=+165.680719888" watchObservedRunningTime="2025-10-11 10:28:50.896718931 +0000 UTC m=+165.681145640" Oct 11 10:28:50.902849 master-2 kubenswrapper[4776]: I1011 10:28:50.902778 4776 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-7769d9677-wh775" podStartSLOduration=108.549571228 podStartE2EDuration="1m53.902761375s" podCreationTimestamp="2025-10-11 10:26:57 +0000 UTC" firstStartedPulling="2025-10-11 10:28:36.923118314 +0000 UTC m=+151.707545023" lastFinishedPulling="2025-10-11 10:28:42.276308451 +0000 UTC m=+157.060735170" observedRunningTime="2025-10-11 10:28:50.849596714 +0000 UTC m=+165.634023423" watchObservedRunningTime="2025-10-11 10:28:50.902761375 +0000 UTC m=+165.687188084" Oct 11 10:28:50.914697 master-2 kubenswrapper[4776]: I1011 10:28:50.914620 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-5tqrt" podStartSLOduration=1.914595515 podStartE2EDuration="1.914595515s" podCreationTimestamp="2025-10-11 10:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:28:50.913050691 +0000 UTC m=+165.697477400" watchObservedRunningTime="2025-10-11 10:28:50.914595515 +0000 UTC m=+165.699022224" Oct 11 10:28:51.780902 master-2 kubenswrapper[4776]: I1011 10:28:51.780776 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" event={"ID":"7a89cb41-fb97-4282-a12d-c6f8d87bc41e","Type":"ContainerStarted","Data":"a486fb47915dcf562b4049ef498fba0a79dc2f0d9c2b35c61e3a77be9dcdeae7"} Oct 11 10:28:52.786817 master-2 kubenswrapper[4776]: I1011 10:28:52.786722 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sgvjd" event={"ID":"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62","Type":"ContainerStarted","Data":"77e97cf5afe4c800f46be81a45fd1c5b7ad05b15de9779b61b57bc99ea5963db"} Oct 11 10:28:52.786817 master-2 kubenswrapper[4776]: I1011 10:28:52.786776 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sgvjd" event={"ID":"e3f3ba3c-1d27-4529-9ae3-a61f88e50b62","Type":"ContainerStarted","Data":"69cfa4bd3903110c0f93b93c280a5ba53c6b44fa6d9f0abdd2bdf1bd106527d9"} Oct 11 10:28:52.786817 master-2 kubenswrapper[4776]: I1011 10:28:52.786821 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-sgvjd" Oct 11 10:28:52.789203 master-2 kubenswrapper[4776]: I1011 10:28:52.789162 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" event={"ID":"7a89cb41-fb97-4282-a12d-c6f8d87bc41e","Type":"ContainerStarted","Data":"2fdacc499227869c48e6c020ccf86a1927bcc28943d27adf9666ea1d0e17f652"} Oct 11 10:28:52.792867 master-2 kubenswrapper[4776]: I1011 10:28:52.792828 4776 generic.go:334] "Generic (PLEG): container finished" podID="e350b624-6581-4982-96f3-cd5c37256e02" containerID="e63a68a9ce6429385cd633344bba068b7bcee26a7703c223cb15b256f8b6c278" exitCode=0 Oct 11 10:28:52.793685 master-2 kubenswrapper[4776]: I1011 10:28:52.792948 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" event={"ID":"e350b624-6581-4982-96f3-cd5c37256e02","Type":"ContainerDied","Data":"e63a68a9ce6429385cd633344bba068b7bcee26a7703c223cb15b256f8b6c278"} Oct 11 10:28:52.807210 master-2 kubenswrapper[4776]: I1011 10:28:52.807116 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-sgvjd" podStartSLOduration=2.515632178 podStartE2EDuration="4.807093961s" podCreationTimestamp="2025-10-11 
10:28:48 +0000 UTC" firstStartedPulling="2025-10-11 10:28:49.742776573 +0000 UTC m=+164.527203282" lastFinishedPulling="2025-10-11 10:28:52.034238366 +0000 UTC m=+166.818665065" observedRunningTime="2025-10-11 10:28:52.806628438 +0000 UTC m=+167.591055207" watchObservedRunningTime="2025-10-11 10:28:52.807093961 +0000 UTC m=+167.591520690" Oct 11 10:28:52.861695 master-2 kubenswrapper[4776]: I1011 10:28:52.860108 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podStartSLOduration=8.990944144 podStartE2EDuration="19.860089667s" podCreationTimestamp="2025-10-11 10:28:33 +0000 UTC" firstStartedPulling="2025-10-11 10:28:38.424325372 +0000 UTC m=+153.208752091" lastFinishedPulling="2025-10-11 10:28:49.293470855 +0000 UTC m=+164.077897614" observedRunningTime="2025-10-11 10:28:52.857795321 +0000 UTC m=+167.642222050" watchObservedRunningTime="2025-10-11 10:28:52.860089667 +0000 UTC m=+167.644516376" Oct 11 10:28:52.926995 master-2 kubenswrapper[4776]: I1011 10:28:52.926742 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:52.926995 master-2 kubenswrapper[4776]: I1011 10:28:52.926813 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: I1011 10:28:52.935667 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]etcd ok Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:28:52.935735 master-2 kubenswrapper[4776]: livez check failed Oct 11 10:28:52.936556 master-2 kubenswrapper[4776]: I1011 10:28:52.935768 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Oct 11 10:28:53.800590 master-2 kubenswrapper[4776]: I1011 10:28:53.800490 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" event={"ID":"e350b624-6581-4982-96f3-cd5c37256e02","Type":"ContainerStarted","Data":"c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912"} Oct 11 10:28:53.820729 master-2 kubenswrapper[4776]: I1011 10:28:53.820607 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podStartSLOduration=8.539214492 podStartE2EDuration="10.820576185s" podCreationTimestamp="2025-10-11 10:28:43 +0000 UTC" firstStartedPulling="2025-10-11 10:28:49.737162331 +0000 UTC m=+164.521589040" lastFinishedPulling="2025-10-11 10:28:52.018524024 +0000 UTC m=+166.802950733" observedRunningTime="2025-10-11 10:28:53.817471646 +0000 UTC m=+168.601898385" watchObservedRunningTime="2025-10-11 10:28:53.820576185 +0000 UTC m=+168.605002894" Oct 11 10:28:54.445451 master-2 kubenswrapper[4776]: I1011 10:28:54.445339 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:28:54.450321 master-2 kubenswrapper[4776]: I1011 10:28:54.450271 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7b07707-84bd-43a6-a43d-6680decaa210-serving-cert\") pod \"cluster-version-operator-55bd67947c-tpbwx\" (UID: \"b7b07707-84bd-43a6-a43d-6680decaa210\") " pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:28:57.366630 master-2 kubenswrapper[4776]: I1011 10:28:57.366538 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:57.367258 master-2 kubenswrapper[4776]: I1011 10:28:57.366652 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:57.375742 master-2 kubenswrapper[4776]: I1011 10:28:57.375695 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:57.836138 master-2 kubenswrapper[4776]: I1011 10:28:57.836078 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:28:57.935587 master-2 kubenswrapper[4776]: I1011 10:28:57.935491 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:28:57.943366 master-2 kubenswrapper[4776]: I1011 10:28:57.943285 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:29:03.057813 master-2 kubenswrapper[4776]: I1011 10:29:03.057728 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:29:03.058523 master-2 kubenswrapper[4776]: I1011 10:29:03.058299 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" Oct 11 10:29:03.085946 master-2 kubenswrapper[4776]: W1011 10:29:03.085882 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7b07707_84bd_43a6_a43d_6680decaa210.slice/crio-0d15a643ea56b9893fbb1737fa71c1a6d03dea576b0aff6e3f4a0561257eb624 WatchSource:0}: Error finding container 0d15a643ea56b9893fbb1737fa71c1a6d03dea576b0aff6e3f4a0561257eb624: Status 404 returned error can't find the container with id 0d15a643ea56b9893fbb1737fa71c1a6d03dea576b0aff6e3f4a0561257eb624 Oct 11 10:29:03.577611 master-2 kubenswrapper[4776]: I1011 10:29:03.577532 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:29:03.577896 master-2 kubenswrapper[4776]: E1011 10:29:03.577842 4776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:29:03.578015 master-2 kubenswrapper[4776]: E1011 10:29:03.577980 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:35.57794556 +0000 UTC m=+210.362372299 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : configmap "client-ca" not found Oct 11 10:29:03.793129 master-2 kubenswrapper[4776]: I1011 10:29:03.793061 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-sgvjd" Oct 11 10:29:03.858620 master-2 kubenswrapper[4776]: I1011 10:29:03.858492 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" event={"ID":"b7b07707-84bd-43a6-a43d-6680decaa210","Type":"ContainerStarted","Data":"0d15a643ea56b9893fbb1737fa71c1a6d03dea576b0aff6e3f4a0561257eb624"} Oct 11 10:29:05.870844 master-2 kubenswrapper[4776]: I1011 10:29:05.870717 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" event={"ID":"b7b07707-84bd-43a6-a43d-6680decaa210","Type":"ContainerStarted","Data":"173f2aafa4e9f75815282d30aaf59a9c91879c49ed9a0dc06484b03a065a2298"} Oct 11 10:29:05.889615 master-2 kubenswrapper[4776]: I1011 10:29:05.889431 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx" podStartSLOduration=137.825768576 podStartE2EDuration="2m19.889389399s" podCreationTimestamp="2025-10-11 10:26:46 +0000 UTC" firstStartedPulling="2025-10-11 10:29:03.088469855 +0000 UTC m=+177.872896614" lastFinishedPulling="2025-10-11 10:29:05.152090728 +0000 UTC m=+179.936517437" observedRunningTime="2025-10-11 10:29:05.885728294 +0000 UTC m=+180.670155043" watchObservedRunningTime="2025-10-11 10:29:05.889389399 +0000 UTC m=+180.673816148" Oct 11 10:29:06.517653 master-2 
kubenswrapper[4776]: I1011 10:29:06.517593 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:29:06.518129 master-2 kubenswrapper[4776]: E1011 10:29:06.517803 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:29:06.518230 master-2 kubenswrapper[4776]: E1011 10:29:06.518210 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:38.518177545 +0000 UTC m=+213.302604284 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : configmap "client-ca" not found Oct 11 10:29:08.548237 master-2 kubenswrapper[4776]: I1011 10:29:08.548059 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:29:08.548237 master-2 kubenswrapper[4776]: I1011 10:29:08.548209 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:29:08.555202 master-2 kubenswrapper[4776]: I1011 10:29:08.555147 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:29:08.555891 master-2 kubenswrapper[4776]: I1011 10:29:08.555838 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66dee5be-e631-462d-8a2c-51a2031a83a2-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-wq4jf\" (UID: \"66dee5be-e631-462d-8a2c-51a2031a83a2\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:29:08.649857 master-2 kubenswrapper[4776]: I1011 10:29:08.649656 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:29:08.649857 master-2 kubenswrapper[4776]: I1011 10:29:08.649827 4776 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:29:08.649857 master-2 kubenswrapper[4776]: I1011 10:29:08.649872 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:29:08.650240 master-2 kubenswrapper[4776]: I1011 10:29:08.649943 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:29:08.650240 master-2 kubenswrapper[4776]: I1011 10:29:08.649983 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:29:08.650240 master-2 kubenswrapper[4776]: I1011 10:29:08.650033 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:29:08.650240 master-2 kubenswrapper[4776]: I1011 10:29:08.650069 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:29:08.650240 master-2 kubenswrapper[4776]: I1011 10:29:08.650153 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:29:08.650240 master-2 kubenswrapper[4776]: I1011 10:29:08.650185 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:29:08.650240 master-2 kubenswrapper[4776]: I1011 
10:29:08.650235 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:29:08.655588 master-2 kubenswrapper[4776]: I1011 10:29:08.655504 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e3281eb7-fb96-4bae-8c55-b79728d426b0-srv-cert\") pod \"catalog-operator-f966fb6f8-8gkqg\" (UID: \"e3281eb7-fb96-4bae-8c55-b79728d426b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:29:08.655740 master-2 kubenswrapper[4776]: I1011 10:29:08.655506 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-s5r5b\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:29:08.656166 master-2 kubenswrapper[4776]: I1011 10:29:08.656100 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4354488-1b32-422d-bb06-767a952192a5-srv-cert\") pod \"olm-operator-867f8475d9-8lf59\" (UID: \"d4354488-1b32-422d-bb06-767a952192a5\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:29:08.657337 master-2 kubenswrapper[4776]: I1011 10:29:08.656737 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e20ebc39-150b-472a-bb22-328d8f5db87b-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-xzntp\" (UID: \"e20ebc39-150b-472a-bb22-328d8f5db87b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:29:08.657337 master-2 kubenswrapper[4776]: I1011 10:29:08.656827 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbaa6ca7-9865-42f6-8030-2decf702caa1-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-h8588\" (UID: \"dbaa6ca7-9865-42f6-8030-2decf702caa1\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:29:08.657337 master-2 kubenswrapper[4776]: I1011 10:29:08.657276 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-5r2t9\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:29:08.657337 master-2 kubenswrapper[4776]: I1011 10:29:08.657289 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/548333d7-2374-4c38-b4fd-45c2bee2ac4e-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-b88g6\" (UID: \"548333d7-2374-4c38-b4fd-45c2bee2ac4e\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:29:08.657719 master-2 kubenswrapper[4776]: I1011 10:29:08.657605 4776 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e860f23-9dae-4606-9426-0edec38a332f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-bjntd\" (UID: \"7e860f23-9dae-4606-9426-0edec38a332f\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:29:08.659125 master-2 kubenswrapper[4776]: I1011 10:29:08.659069 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7652e0ca-2d18-48c7-80e0-f4a936038377-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-wsmdd\" (UID: \"7652e0ca-2d18-48c7-80e0-f4a936038377\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:29:08.659468 master-2 kubenswrapper[4776]: I1011 10:29:08.659352 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e4536c84-d8f3-4808-bf8b-9b40695f46de-proxy-tls\") pod \"machine-config-operator-7b75469658-jtmwh\" (UID: \"e4536c84-d8f3-4808-bf8b-9b40695f46de\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:29:08.678842 master-2 kubenswrapper[4776]: I1011 10:29:08.678792 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:29:08.697773 master-2 kubenswrapper[4776]: I1011 10:29:08.697692 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:29:08.728524 master-2 kubenswrapper[4776]: I1011 10:29:08.728438 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" Oct 11 10:29:08.797018 master-2 kubenswrapper[4776]: I1011 10:29:08.796959 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:29:08.804917 master-2 kubenswrapper[4776]: I1011 10:29:08.804444 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:29:08.838282 master-2 kubenswrapper[4776]: I1011 10:29:08.838221 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" Oct 11 10:29:08.877367 master-2 kubenswrapper[4776]: I1011 10:29:08.875435 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:29:08.886552 master-2 kubenswrapper[4776]: I1011 10:29:08.886495 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" Oct 11 10:29:08.901146 master-2 kubenswrapper[4776]: I1011 10:29:08.901093 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" Oct 11 10:29:08.927423 master-2 kubenswrapper[4776]: I1011 10:29:08.927298 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" Oct 11 10:29:08.948641 master-2 kubenswrapper[4776]: I1011 10:29:08.947998 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:29:09.109453 master-2 kubenswrapper[4776]: I1011 10:29:09.107353 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd"] Oct 11 10:29:09.162305 master-2 kubenswrapper[4776]: I1011 10:29:09.162254 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp"] Oct 11 10:29:09.173501 master-2 kubenswrapper[4776]: I1011 10:29:09.173135 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-5r2t9"] Oct 11 10:29:09.249244 master-2 kubenswrapper[4776]: I1011 10:29:09.249201 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588"] Oct 11 10:29:09.250598 master-2 kubenswrapper[4776]: I1011 10:29:09.250501 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-s5r5b"] Oct 11 10:29:09.257021 master-2 kubenswrapper[4776]: W1011 10:29:09.256971 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbaa6ca7_9865_42f6_8030_2decf702caa1.slice/crio-942a0a636fb02acdce7f5ca1af79d6c5459f3029a0686eb940200927b07ce9aa WatchSource:0}: Error finding container 942a0a636fb02acdce7f5ca1af79d6c5459f3029a0686eb940200927b07ce9aa: Status 404 returned error can't find the container with id 942a0a636fb02acdce7f5ca1af79d6c5459f3029a0686eb940200927b07ce9aa Oct 11 10:29:09.257882 master-2 kubenswrapper[4776]: W1011 10:29:09.257855 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf33a7e_abea_411d_9a19_85cfe67debe3.slice/crio-d97eb420aaba8fb64f806ab9e8c4614e0403881645e49bc4703d4dfecdcf78f8 WatchSource:0}: Error finding container d97eb420aaba8fb64f806ab9e8c4614e0403881645e49bc4703d4dfecdcf78f8: Status 404 returned error can't find the container with id d97eb420aaba8fb64f806ab9e8c4614e0403881645e49bc4703d4dfecdcf78f8 Oct 11 10:29:09.393556 master-2 kubenswrapper[4776]: I1011 10:29:09.393503 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf"] Oct 11 10:29:09.400134 master-2 kubenswrapper[4776]: W1011 10:29:09.400094 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66dee5be_e631_462d_8a2c_51a2031a83a2.slice/crio-2b875e637bce4c66d9aee618d210f875af9b88b999fa1547741d12d0c13fb027 WatchSource:0}: Error finding container 2b875e637bce4c66d9aee618d210f875af9b88b999fa1547741d12d0c13fb027: Status 404 returned error can't find the container with id 2b875e637bce4c66d9aee618d210f875af9b88b999fa1547741d12d0c13fb027 Oct 11 10:29:09.432369 master-2 kubenswrapper[4776]: I1011 10:29:09.432316 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh"] Oct 11 10:29:09.437969 master-2 kubenswrapper[4776]: I1011 10:29:09.437922 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59"] Oct 11 10:29:09.439641 master-2 kubenswrapper[4776]: W1011 10:29:09.439609 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4536c84_d8f3_4808_bf8b_9b40695f46de.slice/crio-d077a5eab5d81ce2290e601c65e085ee2d5d37c0530bcc71c015b79e447f81b0 WatchSource:0}: Error finding container d077a5eab5d81ce2290e601c65e085ee2d5d37c0530bcc71c015b79e447f81b0: Status 404 returned error can't find the container with id d077a5eab5d81ce2290e601c65e085ee2d5d37c0530bcc71c015b79e447f81b0 Oct 11 10:29:09.441444 master-2 kubenswrapper[4776]: W1011 10:29:09.441400 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4354488_1b32_422d_bb06_767a952192a5.slice/crio-a556a45ac137fda790c7adca931defb5e82d192fa03e7152cd0556a3c5ba907b WatchSource:0}: Error finding container a556a45ac137fda790c7adca931defb5e82d192fa03e7152cd0556a3c5ba907b: Status 404 returned error can't find the container with id a556a45ac137fda790c7adca931defb5e82d192fa03e7152cd0556a3c5ba907b Oct 11 10:29:09.446621 master-2 kubenswrapper[4776]: I1011 10:29:09.446590 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg"] Oct 11 10:29:09.448269 master-2 kubenswrapper[4776]: I1011 10:29:09.448179 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-9dbb96f7-b88g6"] Oct 11 10:29:09.453745 master-2 kubenswrapper[4776]: W1011 10:29:09.453662 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3281eb7_fb96_4bae_8c55_b79728d426b0.slice/crio-8bbd939af8c063654038f5f4b60919b6db007b5064acc7639052fd6ecc1e54ff WatchSource:0}: Error finding container 8bbd939af8c063654038f5f4b60919b6db007b5064acc7639052fd6ecc1e54ff: Status 404 returned error can't find the container with id 8bbd939af8c063654038f5f4b60919b6db007b5064acc7639052fd6ecc1e54ff Oct 11 10:29:09.455537 master-2 kubenswrapper[4776]: W1011 10:29:09.455504 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod548333d7_2374_4c38_b4fd_45c2bee2ac4e.slice/crio-724b4e4876a7e65a5dfd8a937e512c0c0dab4fa561ff2efd2ea0fe10cd52f778 WatchSource:0}: Error finding container 724b4e4876a7e65a5dfd8a937e512c0c0dab4fa561ff2efd2ea0fe10cd52f778: Status 404 returned error can't find the container with id 724b4e4876a7e65a5dfd8a937e512c0c0dab4fa561ff2efd2ea0fe10cd52f778 Oct 11 10:29:09.594853 master-2 kubenswrapper[4776]: I1011 10:29:09.594728 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd"] Oct 11 10:29:09.900707 master-2 kubenswrapper[4776]: I1011 10:29:09.900557 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" event={"ID":"66dee5be-e631-462d-8a2c-51a2031a83a2","Type":"ContainerStarted","Data":"2b875e637bce4c66d9aee618d210f875af9b88b999fa1547741d12d0c13fb027"} Oct 11 10:29:09.902151 master-2 kubenswrapper[4776]: I1011 10:29:09.902100 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" 
event={"ID":"64310b0b-bae1-4ad3-b106-6d59d47d29b2","Type":"ContainerStarted","Data":"e72cc89f7bb8839ad3fcaec89df9b0ae1c41473603f0bffc6a5201981557d826"} Oct 11 10:29:09.904696 master-2 kubenswrapper[4776]: I1011 10:29:09.904590 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" event={"ID":"e4536c84-d8f3-4808-bf8b-9b40695f46de","Type":"ContainerStarted","Data":"174f4a4d112f231ff625c542cc912a8e0a801f8c86c1b8c10689aa8a9d412a99"} Oct 11 10:29:09.904696 master-2 kubenswrapper[4776]: I1011 10:29:09.904637 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" event={"ID":"e4536c84-d8f3-4808-bf8b-9b40695f46de","Type":"ContainerStarted","Data":"873a86c033a5133f32e69aa7992e031067b942254453e75c7b18b231e747b156"} Oct 11 10:29:09.904696 master-2 kubenswrapper[4776]: I1011 10:29:09.904663 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" event={"ID":"e4536c84-d8f3-4808-bf8b-9b40695f46de","Type":"ContainerStarted","Data":"d077a5eab5d81ce2290e601c65e085ee2d5d37c0530bcc71c015b79e447f81b0"} Oct 11 10:29:09.906428 master-2 kubenswrapper[4776]: I1011 10:29:09.906399 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" event={"ID":"7652e0ca-2d18-48c7-80e0-f4a936038377","Type":"ContainerStarted","Data":"1f4d3a48a71555ffdcdf1c1073fbe86ebf6d442fb70386b341facb9625835980"} Oct 11 10:29:09.911699 master-2 kubenswrapper[4776]: I1011 10:29:09.911617 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" event={"ID":"e20ebc39-150b-472a-bb22-328d8f5db87b","Type":"ContainerStarted","Data":"4bc4056a907ac0ec224d8bd696e843da4318af4a567956c2044bab87181c045c"} Oct 11 10:29:09.911699 master-2 kubenswrapper[4776]: I1011 10:29:09.911667 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" event={"ID":"e20ebc39-150b-472a-bb22-328d8f5db87b","Type":"ContainerStarted","Data":"30e3aec7445b067ba5a72f4ede367eb6434e3a5b3933f665a386dce066bcbfaa"} Oct 11 10:29:09.914355 master-2 kubenswrapper[4776]: I1011 10:29:09.914138 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" event={"ID":"d4354488-1b32-422d-bb06-767a952192a5","Type":"ContainerStarted","Data":"a556a45ac137fda790c7adca931defb5e82d192fa03e7152cd0556a3c5ba907b"} Oct 11 10:29:09.916037 master-2 kubenswrapper[4776]: I1011 10:29:09.915705 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" event={"ID":"548333d7-2374-4c38-b4fd-45c2bee2ac4e","Type":"ContainerStarted","Data":"5926a997226e15274953a94c6e3df1ecbe8d31dc4836d2e8edaaefd2851bd608"} Oct 11 10:29:09.916037 master-2 kubenswrapper[4776]: I1011 10:29:09.916019 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" event={"ID":"548333d7-2374-4c38-b4fd-45c2bee2ac4e","Type":"ContainerStarted","Data":"724b4e4876a7e65a5dfd8a937e512c0c0dab4fa561ff2efd2ea0fe10cd52f778"} Oct 11 10:29:09.916604 master-2 kubenswrapper[4776]: I1011 10:29:09.916561 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" event={"ID":"cbf33a7e-abea-411d-9a19-85cfe67debe3","Type":"ContainerStarted","Data":"d97eb420aaba8fb64f806ab9e8c4614e0403881645e49bc4703d4dfecdcf78f8"} Oct 11 10:29:09.917791 master-2 kubenswrapper[4776]: I1011 10:29:09.917748 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" event={"ID":"e3281eb7-fb96-4bae-8c55-b79728d426b0","Type":"ContainerStarted","Data":"8bbd939af8c063654038f5f4b60919b6db007b5064acc7639052fd6ecc1e54ff"} Oct 11 10:29:09.918983 master-2 kubenswrapper[4776]: I1011 10:29:09.918893 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" event={"ID":"dbaa6ca7-9865-42f6-8030-2decf702caa1","Type":"ContainerStarted","Data":"942a0a636fb02acdce7f5ca1af79d6c5459f3029a0686eb940200927b07ce9aa"} Oct 11 10:29:09.919942 master-2 kubenswrapper[4776]: I1011 10:29:09.919914 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" event={"ID":"7e860f23-9dae-4606-9426-0edec38a332f","Type":"ContainerStarted","Data":"b805db0d0bd2ed8118b82e487667e217574535ac24dd585e38e0f7c1717a52dd"} Oct 11 10:29:09.923324 master-2 kubenswrapper[4776]: I1011 10:29:09.923285 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh" podStartSLOduration=157.923276427 podStartE2EDuration="2m37.923276427s" podCreationTimestamp="2025-10-11 10:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:09.921290649 +0000 UTC m=+184.705717358" watchObservedRunningTime="2025-10-11 10:29:09.923276427 +0000 UTC m=+184.707703136" Oct 11 10:29:12.448804 master-2 kubenswrapper[4776]: I1011 10:29:12.448738 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-xmz7m"] Oct 11 10:29:12.449416 master-2 kubenswrapper[4776]: I1011 10:29:12.449387 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.454004 master-2 kubenswrapper[4776]: I1011 10:29:12.453903 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 11 10:29:12.499373 master-2 kubenswrapper[4776]: I1011 10:29:12.499301 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phzns\" (UniqueName: \"kubernetes.io/projected/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-kube-api-access-phzns\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.499373 master-2 kubenswrapper[4776]: I1011 10:29:12.499371 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-mcd-auth-proxy-config\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.499682 master-2 kubenswrapper[4776]: I1011 10:29:12.499448 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-rootfs\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.499797 master-2 kubenswrapper[4776]: I1011 10:29:12.499706 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-proxy-tls\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.600661 master-2 kubenswrapper[4776]: I1011 10:29:12.600578 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-proxy-tls\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.600661 master-2 kubenswrapper[4776]: I1011 10:29:12.600688 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phzns\" (UniqueName: \"kubernetes.io/projected/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-kube-api-access-phzns\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.600661 master-2 kubenswrapper[4776]: I1011 10:29:12.600729 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-mcd-auth-proxy-config\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.600661 master-2 kubenswrapper[4776]: I1011 10:29:12.600764 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-rootfs\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.601189 master-2 kubenswrapper[4776]: I1011 10:29:12.600852 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-rootfs\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.601757 master-2 kubenswrapper[4776]: I1011 10:29:12.601727 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-mcd-auth-proxy-config\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.607139 master-2 kubenswrapper[4776]: I1011 10:29:12.607103 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-proxy-tls\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.625082 master-2 kubenswrapper[4776]: I1011 10:29:12.625028 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phzns\" (UniqueName: \"kubernetes.io/projected/cdb1ed8c-c61c-48d1-88c2-66bf2783d131-kube-api-access-phzns\") pod \"machine-config-daemon-xmz7m\" (UID: \"cdb1ed8c-c61c-48d1-88c2-66bf2783d131\") " pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:12.682010 master-2 kubenswrapper[4776]: I1011 10:29:12.681959 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-jdkgd" Oct 11 10:29:12.767365 master-2 kubenswrapper[4776]: I1011 10:29:12.767224 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" Oct 11 10:29:20.978195 master-2 kubenswrapper[4776]: I1011 10:29:20.977498 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" event={"ID":"e3281eb7-fb96-4bae-8c55-b79728d426b0","Type":"ContainerStarted","Data":"10b004bcf8fd1ef0733b195df6589766b1519ee70424b80772e6e7e1bc36c75e"} Oct 11 10:29:20.978195 master-2 kubenswrapper[4776]: I1011 10:29:20.977915 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:29:20.988001 master-2 kubenswrapper[4776]: I1011 10:29:20.987076 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" event={"ID":"dbaa6ca7-9865-42f6-8030-2decf702caa1","Type":"ContainerStarted","Data":"44ceb896cc8343bbb3f15f6ce236e68c97335c0859c31751e10b1cff6a07681c"} Oct 11 10:29:20.988001 master-2 kubenswrapper[4776]: I1011 10:29:20.987208 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" Oct 11 10:29:20.989484 master-2 kubenswrapper[4776]: I1011 10:29:20.988978 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" event={"ID":"e20ebc39-150b-472a-bb22-328d8f5db87b","Type":"ContainerStarted","Data":"ebb38a29026c752699221fdf069077ff027321233818c7fd1baeae0ce79ca4c1"} Oct 11 10:29:20.990361 master-2 kubenswrapper[4776]: I1011 10:29:20.989857 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:29:20.991760 master-2 kubenswrapper[4776]: I1011 10:29:20.991430 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" event={"ID":"7e860f23-9dae-4606-9426-0edec38a332f","Type":"ContainerStarted","Data":"abcfaa1bb3973d38dfde3d5e4981f116ced123c94a2a75e51dd75e4997f3fd4d"} Oct 11 10:29:20.997173 master-2 kubenswrapper[4776]: I1011 10:29:20.997121 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" event={"ID":"d4354488-1b32-422d-bb06-767a952192a5","Type":"ContainerStarted","Data":"d79633d40a0d1afd1ab3529abe17263cbee7f6776b1af5edf3ba2ba654773573"} Oct 11 10:29:20.997431 master-2 kubenswrapper[4776]: I1011 10:29:20.997372 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg" podStartSLOduration=157.908370068 podStartE2EDuration="2m48.9973565s" podCreationTimestamp="2025-10-11 10:26:32 +0000 UTC" firstStartedPulling="2025-10-11 10:29:09.455216249 +0000 UTC m=+184.239642958" lastFinishedPulling="2025-10-11 10:29:20.544202681 +0000 UTC m=+195.328629390" observedRunningTime="2025-10-11 10:29:20.995850536 +0000 UTC m=+195.780277245" watchObservedRunningTime="2025-10-11 10:29:20.9973565 +0000 UTC m=+195.781783209" Oct 11 10:29:20.997962 master-2 kubenswrapper[4776]: I1011 10:29:20.997904 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:29:21.004884 master-2 kubenswrapper[4776]: I1011 10:29:21.003145 4776 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" event={"ID":"64310b0b-bae1-4ad3-b106-6d59d47d29b2","Type":"ContainerStarted","Data":"70bd3f9c400e0f2d03040b409d0be80f6f7bbda878ae150537f2b4ec7baf71bd"} Oct 11 10:29:21.006785 master-2 kubenswrapper[4776]: I1011 10:29:21.006762 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" event={"ID":"cbf33a7e-abea-411d-9a19-85cfe67debe3","Type":"ContainerStarted","Data":"38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3"} Oct 11 10:29:21.007309 master-2 kubenswrapper[4776]: I1011 10:29:21.007289 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" Oct 11 10:29:21.008519 master-2 kubenswrapper[4776]: I1011 10:29:21.008455 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" event={"ID":"cdb1ed8c-c61c-48d1-88c2-66bf2783d131","Type":"ContainerStarted","Data":"c085d473f14ad61623a8d88060e27a54a61d086f47065481d8611834348b20db"} Oct 11 10:29:21.008607 master-2 kubenswrapper[4776]: I1011 10:29:21.008532 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" event={"ID":"cdb1ed8c-c61c-48d1-88c2-66bf2783d131","Type":"ContainerStarted","Data":"db89193149f068c7107a0d2d00501a02dbd4f1b90fa7b15d8d5b29eb670e0e82"} Oct 11 10:29:21.008607 master-2 kubenswrapper[4776]: I1011 10:29:21.008547 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" event={"ID":"cdb1ed8c-c61c-48d1-88c2-66bf2783d131","Type":"ContainerStarted","Data":"044d1990d5b903504defaa32786c34d22ae4d3d293b3dec6b8a098517966fc1c"} Oct 11 10:29:21.011414 master-2 kubenswrapper[4776]: I1011 10:29:21.011383 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" event={"ID":"66dee5be-e631-462d-8a2c-51a2031a83a2","Type":"ContainerStarted","Data":"5993ee4c50ac66f983c7275e415dab008a25f4d7f1725733f6cd0c4bfccdb402"} Oct 11 10:29:21.011414 master-2 kubenswrapper[4776]: I1011 10:29:21.011420 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" event={"ID":"66dee5be-e631-462d-8a2c-51a2031a83a2","Type":"ContainerStarted","Data":"d441e6d043ca6a5f4f9c4d53fc9f4517672d8d7ce53e4d5876332aa0dff6a002"} Oct 11 10:29:21.018135 master-2 kubenswrapper[4776]: I1011 10:29:21.017884 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" podStartSLOduration=159.77363488700001 podStartE2EDuration="2m51.01786218s" podCreationTimestamp="2025-10-11 10:26:30 +0000 UTC" firstStartedPulling="2025-10-11 10:29:09.316475923 +0000 UTC m=+184.100902632" lastFinishedPulling="2025-10-11 10:29:20.560703216 +0000 UTC m=+195.345129925" observedRunningTime="2025-10-11 10:29:21.016355637 +0000 UTC m=+195.800782366" watchObservedRunningTime="2025-10-11 10:29:21.01786218 +0000 UTC m=+195.802288889" Oct 11 10:29:21.019123 master-2 kubenswrapper[4776]: I1011 10:29:21.019007 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" 
event={"ID":"548333d7-2374-4c38-b4fd-45c2bee2ac4e","Type":"ContainerStarted","Data":"ba8f18fdcf52199cdff7e52a954cf2889c4e32a293bd45bea24bae811f7ed5c9"} Oct 11 10:29:21.023473 master-2 kubenswrapper[4776]: I1011 10:29:21.023428 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" event={"ID":"7652e0ca-2d18-48c7-80e0-f4a936038377","Type":"ContainerStarted","Data":"bee07b3499003457995a526e2769ae6950a3ee1b71df0e623d05c583f95fa09d"} Oct 11 10:29:21.023866 master-2 kubenswrapper[4776]: I1011 10:29:21.023807 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:29:21.025297 master-2 kubenswrapper[4776]: I1011 10:29:21.025256 4776 patch_prober.go:28] interesting pod/marketplace-operator-c4f798dd4-wsmdd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" start-of-body= Oct 11 10:29:21.025378 master-2 kubenswrapper[4776]: I1011 10:29:21.025310 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" podUID="7652e0ca-2d18-48c7-80e0-f4a936038377" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" Oct 11 10:29:21.036183 master-2 kubenswrapper[4776]: I1011 10:29:21.036128 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd" podStartSLOduration=144.161916778 podStartE2EDuration="2m35.036109326s" podCreationTimestamp="2025-10-11 10:26:46 +0000 UTC" firstStartedPulling="2025-10-11 10:29:09.610040097 +0000 UTC m=+184.394466806" lastFinishedPulling="2025-10-11 10:29:20.484232645 +0000 UTC m=+195.268659354" observedRunningTime="2025-10-11 10:29:21.034312724 +0000 UTC m=+195.818739433" watchObservedRunningTime="2025-10-11 10:29:21.036109326 +0000 UTC m=+195.820536035" Oct 11 10:29:21.051548 master-2 kubenswrapper[4776]: I1011 10:29:21.050855 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588" podStartSLOduration=161.835189561 podStartE2EDuration="2m53.05083739s" podCreationTimestamp="2025-10-11 10:26:28 +0000 UTC" firstStartedPulling="2025-10-11 10:29:09.258795923 +0000 UTC m=+184.043222632" lastFinishedPulling="2025-10-11 10:29:20.474443752 +0000 UTC m=+195.258870461" observedRunningTime="2025-10-11 10:29:21.048140892 +0000 UTC m=+195.832567611" watchObservedRunningTime="2025-10-11 10:29:21.05083739 +0000 UTC m=+195.835264099" Oct 11 10:29:21.078936 master-2 kubenswrapper[4776]: I1011 10:29:21.078858 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf" podStartSLOduration=158.005946976 podStartE2EDuration="2m49.078840506s" podCreationTimestamp="2025-10-11 10:26:32 +0000 UTC" firstStartedPulling="2025-10-11 10:29:09.402102549 +0000 UTC m=+184.186529258" lastFinishedPulling="2025-10-11 10:29:20.474996079 +0000 UTC m=+195.259422788" observedRunningTime="2025-10-11 10:29:21.077292012 +0000 UTC m=+195.861718731" watchObservedRunningTime="2025-10-11 10:29:21.078840506 +0000 UTC m=+195.863267205" Oct 11 10:29:21.093886 master-2 kubenswrapper[4776]: I1011 
10:29:21.093162 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" podStartSLOduration=131.805381682 podStartE2EDuration="2m23.093148578s" podCreationTimestamp="2025-10-11 10:26:58 +0000 UTC" firstStartedPulling="2025-10-11 10:29:09.132860137 +0000 UTC m=+183.917286846" lastFinishedPulling="2025-10-11 10:29:20.420627033 +0000 UTC m=+195.205053742" observedRunningTime="2025-10-11 10:29:21.091174281 +0000 UTC m=+195.875600990" watchObservedRunningTime="2025-10-11 10:29:21.093148578 +0000 UTC m=+195.877575287" Oct 11 10:29:21.108152 master-2 kubenswrapper[4776]: I1011 10:29:21.107744 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-9dbb96f7-b88g6" podStartSLOduration=156.098349308 podStartE2EDuration="2m47.107670426s" podCreationTimestamp="2025-10-11 10:26:34 +0000 UTC" firstStartedPulling="2025-10-11 10:29:09.536646834 +0000 UTC m=+184.321073553" lastFinishedPulling="2025-10-11 10:29:20.545967962 +0000 UTC m=+195.330394671" observedRunningTime="2025-10-11 10:29:21.105107132 +0000 UTC m=+195.889533841" watchObservedRunningTime="2025-10-11 10:29:21.107670426 +0000 UTC m=+195.892097135" Oct 11 10:29:21.120424 master-2 kubenswrapper[4776]: I1011 10:29:21.118176 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59" podStartSLOduration=159.014957197 podStartE2EDuration="2m50.118157659s" podCreationTimestamp="2025-10-11 10:26:31 +0000 UTC" firstStartedPulling="2025-10-11 10:29:09.442731219 +0000 UTC m=+184.227157928" lastFinishedPulling="2025-10-11 10:29:20.545931681 +0000 UTC m=+195.330358390" observedRunningTime="2025-10-11 10:29:21.1171598 +0000 UTC m=+195.901586509" watchObservedRunningTime="2025-10-11 10:29:21.118157659 +0000 UTC m=+195.902584368" Oct 11 10:29:21.152704 master-2 kubenswrapper[4776]: I1011 10:29:21.151892 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-xmz7m" podStartSLOduration=9.151870309 podStartE2EDuration="9.151870309s" podCreationTimestamp="2025-10-11 10:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:21.130143973 +0000 UTC m=+195.914570682" watchObservedRunningTime="2025-10-11 10:29:21.151870309 +0000 UTC m=+195.936297058" Oct 11 10:29:21.183953 master-2 kubenswrapper[4776]: I1011 10:29:21.183899 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-66df44bc95-kxhjc_7004f3ff-6db8-446d-94c1-1223e975299d/authentication-operator/0.log" Oct 11 10:29:21.782536 master-2 kubenswrapper[4776]: I1011 10:29:21.782469 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-65b6f4d4c9-5wrz6_e350b624-6581-4982-96f3-cd5c37256e02/fix-audit-permissions/0.log" Oct 11 10:29:21.939699 master-2 kubenswrapper[4776]: I1011 10:29:21.939627 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:29:21.944064 master-2 kubenswrapper[4776]: I1011 10:29:21.944024 4776 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35b21a7b-2a5a-4511-a2d5-d950752b4bda-metrics-certs\") pod \"network-metrics-daemon-w52cn\" (UID: \"35b21a7b-2a5a-4511-a2d5-d950752b4bda\") " pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:29:21.983389 master-2 kubenswrapper[4776]: I1011 10:29:21.983309 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-65b6f4d4c9-5wrz6_e350b624-6581-4982-96f3-cd5c37256e02/oauth-apiserver/0.log" Oct 11 10:29:22.031441 master-2 kubenswrapper[4776]: I1011 10:29:22.031379 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" event={"ID":"64310b0b-bae1-4ad3-b106-6d59d47d29b2","Type":"ContainerStarted","Data":"7fbbe464a06e101f3c0d22eeb9c7ef3680e05cf4fb67606592c87be802acc829"} Oct 11 10:29:22.034169 master-2 kubenswrapper[4776]: I1011 10:29:22.034043 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" event={"ID":"cbf33a7e-abea-411d-9a19-85cfe67debe3","Type":"ContainerStarted","Data":"4c073c5ee5fd6035e588b594f71843fa0867444b7edf11350aaa49874157a615"} Oct 11 10:29:22.037294 master-2 kubenswrapper[4776]: I1011 10:29:22.037238 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:29:22.051349 master-2 kubenswrapper[4776]: I1011 10:29:22.051271 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" podStartSLOduration=115.810472134 podStartE2EDuration="2m7.051258337s" podCreationTimestamp="2025-10-11 10:27:15 +0000 UTC" firstStartedPulling="2025-10-11 10:29:09.179916432 +0000 UTC m=+183.964343141" lastFinishedPulling="2025-10-11 10:29:20.420702635 +0000 UTC m=+195.205129344" observedRunningTime="2025-10-11 10:29:22.050043683 +0000 UTC m=+196.834470382" watchObservedRunningTime="2025-10-11 10:29:22.051258337 +0000 UTC m=+196.835685046" Oct 11 10:29:22.072146 master-2 kubenswrapper[4776]: I1011 10:29:22.072071 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" podStartSLOduration=115.912465572 podStartE2EDuration="2m7.072052167s" podCreationTimestamp="2025-10-11 10:27:15 +0000 UTC" firstStartedPulling="2025-10-11 10:29:09.261005227 +0000 UTC m=+184.045431936" lastFinishedPulling="2025-10-11 10:29:20.420591832 +0000 UTC m=+195.205018531" observedRunningTime="2025-10-11 10:29:22.06903315 +0000 UTC m=+196.853459869" watchObservedRunningTime="2025-10-11 10:29:22.072052167 +0000 UTC m=+196.856478876" Oct 11 10:29:22.172475 master-2 kubenswrapper[4776]: I1011 10:29:22.172405 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w52cn" Oct 11 10:29:22.585342 master-2 kubenswrapper[4776]: I1011 10:29:22.585307 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-7ff449c7c5-cfvjb_6b8dc5b8-3c48-4dba-9992-6e269ca133f1/kube-rbac-proxy/0.log" Oct 11 10:29:22.585533 master-2 kubenswrapper[4776]: I1011 10:29:22.585498 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-w52cn"] Oct 11 10:29:22.589353 master-2 kubenswrapper[4776]: W1011 10:29:22.589306 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35b21a7b_2a5a_4511_a2d5_d950752b4bda.slice/crio-5dcde2e575ebd08bc1de039b107945728fb612faa4db21c614392ec5c88a780b WatchSource:0}: Error finding container 5dcde2e575ebd08bc1de039b107945728fb612faa4db21c614392ec5c88a780b: Status 404 returned error can't find the container with id 5dcde2e575ebd08bc1de039b107945728fb612faa4db21c614392ec5c88a780b Oct 11 10:29:22.783216 master-2 kubenswrapper[4776]: I1011 10:29:22.783182 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-7ff449c7c5-cfvjb_6b8dc5b8-3c48-4dba-9992-6e269ca133f1/cluster-autoscaler-operator/0.log" Oct 11 10:29:22.937495 master-2 kubenswrapper[4776]: I1011 10:29:22.937429 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6"] Oct 11 10:29:22.938210 master-2 kubenswrapper[4776]: I1011 10:29:22.938174 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:22.943345 master-2 kubenswrapper[4776]: I1011 10:29:22.942895 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 11 10:29:22.944102 master-2 kubenswrapper[4776]: I1011 10:29:22.944023 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6"] Oct 11 10:29:22.982617 master-2 kubenswrapper[4776]: I1011 10:29:22.982568 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6c8fbf4498-wq4jf_66dee5be-e631-462d-8a2c-51a2031a83a2/cluster-baremetal-operator/0.log" Oct 11 10:29:23.039042 master-2 kubenswrapper[4776]: I1011 10:29:23.038994 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-w52cn" event={"ID":"35b21a7b-2a5a-4511-a2d5-d950752b4bda","Type":"ContainerStarted","Data":"5dcde2e575ebd08bc1de039b107945728fb612faa4db21c614392ec5c88a780b"} Oct 11 10:29:23.054604 master-2 kubenswrapper[4776]: I1011 10:29:23.054536 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5ch6\" (UniqueName: \"kubernetes.io/projected/4e35cfca-8883-465b-b952-cc91f7f5dd81-kube-api-access-m5ch6\") pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.054820 master-2 kubenswrapper[4776]: I1011 10:29:23.054628 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e35cfca-8883-465b-b952-cc91f7f5dd81-apiservice-cert\") 
pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.054820 master-2 kubenswrapper[4776]: I1011 10:29:23.054651 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4e35cfca-8883-465b-b952-cc91f7f5dd81-tmpfs\") pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.054820 master-2 kubenswrapper[4776]: I1011 10:29:23.054713 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e35cfca-8883-465b-b952-cc91f7f5dd81-webhook-cert\") pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.155917 master-2 kubenswrapper[4776]: I1011 10:29:23.155832 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5ch6\" (UniqueName: \"kubernetes.io/projected/4e35cfca-8883-465b-b952-cc91f7f5dd81-kube-api-access-m5ch6\") pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.156163 master-2 kubenswrapper[4776]: I1011 10:29:23.155983 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4e35cfca-8883-465b-b952-cc91f7f5dd81-tmpfs\") pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.156163 master-2 kubenswrapper[4776]: I1011 10:29:23.156004 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e35cfca-8883-465b-b952-cc91f7f5dd81-apiservice-cert\") pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.156163 master-2 kubenswrapper[4776]: I1011 10:29:23.156148 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e35cfca-8883-465b-b952-cc91f7f5dd81-webhook-cert\") pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.157340 master-2 kubenswrapper[4776]: I1011 10:29:23.157086 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4e35cfca-8883-465b-b952-cc91f7f5dd81-tmpfs\") pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.159638 master-2 kubenswrapper[4776]: I1011 10:29:23.159589 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e35cfca-8883-465b-b952-cc91f7f5dd81-webhook-cert\") pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " 
pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.160790 master-2 kubenswrapper[4776]: I1011 10:29:23.160728 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e35cfca-8883-465b-b952-cc91f7f5dd81-apiservice-cert\") pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.173448 master-2 kubenswrapper[4776]: I1011 10:29:23.173394 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5ch6\" (UniqueName: \"kubernetes.io/projected/4e35cfca-8883-465b-b952-cc91f7f5dd81-kube-api-access-m5ch6\") pod \"packageserver-77c85f5c6-cfrh6\" (UID: \"4e35cfca-8883-465b-b952-cc91f7f5dd81\") " pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.182764 master-2 kubenswrapper[4776]: I1011 10:29:23.182726 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6c8fbf4498-wq4jf_66dee5be-e631-462d-8a2c-51a2031a83a2/baremetal-kube-rbac-proxy/0.log" Oct 11 10:29:23.254268 master-2 kubenswrapper[4776]: I1011 10:29:23.254130 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:23.381340 master-2 kubenswrapper[4776]: I1011 10:29:23.381285 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-84f9cbd5d9-bjntd_7e860f23-9dae-4606-9426-0edec38a332f/control-plane-machine-set-operator/0.log" Oct 11 10:29:23.584496 master-2 kubenswrapper[4776]: I1011 10:29:23.584315 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-9dbb96f7-b88g6_548333d7-2374-4c38-b4fd-45c2bee2ac4e/kube-rbac-proxy/0.log" Oct 11 10:29:23.664612 master-2 kubenswrapper[4776]: I1011 10:29:23.664544 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6"] Oct 11 10:29:23.670309 master-2 kubenswrapper[4776]: W1011 10:29:23.670241 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e35cfca_8883_465b_b952_cc91f7f5dd81.slice/crio-42b9e229d25461501c54106355bb73ca72ed3779d4e12a80d61b0141ed76e6f7 WatchSource:0}: Error finding container 42b9e229d25461501c54106355bb73ca72ed3779d4e12a80d61b0141ed76e6f7: Status 404 returned error can't find the container with id 42b9e229d25461501c54106355bb73ca72ed3779d4e12a80d61b0141ed76e6f7 Oct 11 10:29:23.779570 master-2 kubenswrapper[4776]: I1011 10:29:23.779534 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-9dbb96f7-b88g6_548333d7-2374-4c38-b4fd-45c2bee2ac4e/machine-api-operator/0.log" Oct 11 10:29:23.980276 master-2 kubenswrapper[4776]: I1011 10:29:23.980214 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-7769d9677-wh775_893af718-1fec-4b8b-8349-d85f978f4140/dns-operator/0.log" Oct 11 10:29:24.045577 master-2 kubenswrapper[4776]: I1011 10:29:24.045502 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" 
event={"ID":"4e35cfca-8883-465b-b952-cc91f7f5dd81","Type":"ContainerStarted","Data":"d331c322f9894436a43d3dc3344c299da66b85b01c2f7d860c8463bca15e8045"} Oct 11 10:29:24.045577 master-2 kubenswrapper[4776]: I1011 10:29:24.045569 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" event={"ID":"4e35cfca-8883-465b-b952-cc91f7f5dd81","Type":"ContainerStarted","Data":"42b9e229d25461501c54106355bb73ca72ed3779d4e12a80d61b0141ed76e6f7"} Oct 11 10:29:24.046422 master-2 kubenswrapper[4776]: I1011 10:29:24.046388 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:24.047650 master-2 kubenswrapper[4776]: I1011 10:29:24.047583 4776 generic.go:334] "Generic (PLEG): container finished" podID="58aef476-6586-47bb-bf45-dbeccac6271a" containerID="4086f54b40e82a9c1520dd01a01f1e17aa8e4bfa53d48bc75f9b65494739f67c" exitCode=0 Oct 11 10:29:24.047650 master-2 kubenswrapper[4776]: I1011 10:29:24.047634 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" event={"ID":"58aef476-6586-47bb-bf45-dbeccac6271a","Type":"ContainerDied","Data":"4086f54b40e82a9c1520dd01a01f1e17aa8e4bfa53d48bc75f9b65494739f67c"} Oct 11 10:29:24.048005 master-2 kubenswrapper[4776]: I1011 10:29:24.047969 4776 scope.go:117] "RemoveContainer" containerID="4086f54b40e82a9c1520dd01a01f1e17aa8e4bfa53d48bc75f9b65494739f67c" Oct 11 10:29:24.067336 master-2 kubenswrapper[4776]: I1011 10:29:24.067261 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" podStartSLOduration=2.067242659 podStartE2EDuration="2.067242659s" podCreationTimestamp="2025-10-11 10:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:24.065523409 +0000 UTC m=+198.849950118" watchObservedRunningTime="2025-10-11 10:29:24.067242659 +0000 UTC m=+198.851669368" Oct 11 10:29:24.076808 master-2 kubenswrapper[4776]: I1011 10:29:24.076758 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t"] Oct 11 10:29:24.077514 master-2 kubenswrapper[4776]: I1011 10:29:24.077485 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" Oct 11 10:29:24.081395 master-2 kubenswrapper[4776]: I1011 10:29:24.081341 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Oct 11 10:29:24.086091 master-2 kubenswrapper[4776]: I1011 10:29:24.086032 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t"] Oct 11 10:29:24.167991 master-2 kubenswrapper[4776]: I1011 10:29:24.167923 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1029b995-20ca-45f4-bccb-e83ccee2075f-mcc-auth-proxy-config\") pod \"machine-config-controller-6dcc7bf8f6-4496t\" (UID: \"1029b995-20ca-45f4-bccb-e83ccee2075f\") " pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" Oct 11 10:29:24.168209 master-2 kubenswrapper[4776]: I1011 10:29:24.168021 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgbhl\" (UniqueName: \"kubernetes.io/projected/1029b995-20ca-45f4-bccb-e83ccee2075f-kube-api-access-lgbhl\") pod \"machine-config-controller-6dcc7bf8f6-4496t\" (UID: \"1029b995-20ca-45f4-bccb-e83ccee2075f\") " pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" Oct 11 10:29:24.168209 master-2 kubenswrapper[4776]: I1011 10:29:24.168163 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1029b995-20ca-45f4-bccb-e83ccee2075f-proxy-tls\") pod \"machine-config-controller-6dcc7bf8f6-4496t\" (UID: \"1029b995-20ca-45f4-bccb-e83ccee2075f\") " pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" Oct 11 10:29:24.180077 master-2 kubenswrapper[4776]: I1011 10:29:24.180039 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-7769d9677-wh775_893af718-1fec-4b8b-8349-d85f978f4140/kube-rbac-proxy/0.log" Oct 11 10:29:24.283419 master-2 kubenswrapper[4776]: I1011 10:29:24.269106 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1029b995-20ca-45f4-bccb-e83ccee2075f-proxy-tls\") pod \"machine-config-controller-6dcc7bf8f6-4496t\" (UID: \"1029b995-20ca-45f4-bccb-e83ccee2075f\") " pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" Oct 11 10:29:24.283419 master-2 kubenswrapper[4776]: I1011 10:29:24.269166 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1029b995-20ca-45f4-bccb-e83ccee2075f-mcc-auth-proxy-config\") pod \"machine-config-controller-6dcc7bf8f6-4496t\" (UID: \"1029b995-20ca-45f4-bccb-e83ccee2075f\") " pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" Oct 11 10:29:24.283419 master-2 kubenswrapper[4776]: I1011 10:29:24.269196 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgbhl\" (UniqueName: \"kubernetes.io/projected/1029b995-20ca-45f4-bccb-e83ccee2075f-kube-api-access-lgbhl\") pod \"machine-config-controller-6dcc7bf8f6-4496t\" (UID: \"1029b995-20ca-45f4-bccb-e83ccee2075f\") " 
pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" Oct 11 10:29:24.283419 master-2 kubenswrapper[4776]: I1011 10:29:24.270575 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1029b995-20ca-45f4-bccb-e83ccee2075f-mcc-auth-proxy-config\") pod \"machine-config-controller-6dcc7bf8f6-4496t\" (UID: \"1029b995-20ca-45f4-bccb-e83ccee2075f\") " pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" Oct 11 10:29:24.367658 master-2 kubenswrapper[4776]: I1011 10:29:24.367615 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1029b995-20ca-45f4-bccb-e83ccee2075f-proxy-tls\") pod \"machine-config-controller-6dcc7bf8f6-4496t\" (UID: \"1029b995-20ca-45f4-bccb-e83ccee2075f\") " pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" Oct 11 10:29:24.371981 master-2 kubenswrapper[4776]: I1011 10:29:24.371948 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgbhl\" (UniqueName: \"kubernetes.io/projected/1029b995-20ca-45f4-bccb-e83ccee2075f-kube-api-access-lgbhl\") pod \"machine-config-controller-6dcc7bf8f6-4496t\" (UID: \"1029b995-20ca-45f4-bccb-e83ccee2075f\") " pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" Oct 11 10:29:24.401922 master-2 kubenswrapper[4776]: I1011 10:29:24.401857 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" Oct 11 10:29:24.429257 master-2 kubenswrapper[4776]: I1011 10:29:24.428980 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6" Oct 11 10:29:24.720755 master-2 kubenswrapper[4776]: I1011 10:29:24.720704 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mwqr6"] Oct 11 10:29:24.721959 master-2 kubenswrapper[4776]: I1011 10:29:24.721926 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:24.727990 master-2 kubenswrapper[4776]: I1011 10:29:24.727932 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mwqr6"] Oct 11 10:29:24.775319 master-2 kubenswrapper[4776]: I1011 10:29:24.775235 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-catalog-content\") pod \"certified-operators-mwqr6\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") " pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:24.775573 master-2 kubenswrapper[4776]: I1011 10:29:24.775400 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tlh8\" (UniqueName: \"kubernetes.io/projected/444ea5b2-c9dc-4685-9f66-2273b30d9045-kube-api-access-7tlh8\") pod \"certified-operators-mwqr6\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") " pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:24.775573 master-2 kubenswrapper[4776]: I1011 10:29:24.775464 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-utilities\") pod \"certified-operators-mwqr6\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") " pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:24.779219 master-2 kubenswrapper[4776]: I1011 10:29:24.779170 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-sgvjd_e3f3ba3c-1d27-4529-9ae3-a61f88e50b62/dns/0.log" Oct 11 10:29:24.859195 master-2 kubenswrapper[4776]: I1011 10:29:24.859149 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t"] Oct 11 10:29:24.865318 master-2 kubenswrapper[4776]: W1011 10:29:24.865268 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1029b995_20ca_45f4_bccb_e83ccee2075f.slice/crio-28961758d7110a9ebe5fde4f8561f86b3b5ece7a41c742fe39cdafa150b14f67 WatchSource:0}: Error finding container 28961758d7110a9ebe5fde4f8561f86b3b5ece7a41c742fe39cdafa150b14f67: Status 404 returned error can't find the container with id 28961758d7110a9ebe5fde4f8561f86b3b5ece7a41c742fe39cdafa150b14f67 Oct 11 10:29:24.876273 master-2 kubenswrapper[4776]: I1011 10:29:24.876214 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-utilities\") pod \"certified-operators-mwqr6\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") " pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:24.876392 master-2 kubenswrapper[4776]: I1011 10:29:24.876317 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-catalog-content\") pod \"certified-operators-mwqr6\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") " pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:24.876392 master-2 kubenswrapper[4776]: I1011 10:29:24.876376 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tlh8\" (UniqueName: 
\"kubernetes.io/projected/444ea5b2-c9dc-4685-9f66-2273b30d9045-kube-api-access-7tlh8\") pod \"certified-operators-mwqr6\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") " pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:24.876765 master-2 kubenswrapper[4776]: I1011 10:29:24.876733 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-utilities\") pod \"certified-operators-mwqr6\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") " pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:24.876814 master-2 kubenswrapper[4776]: I1011 10:29:24.876767 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-catalog-content\") pod \"certified-operators-mwqr6\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") " pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:24.893458 master-2 kubenswrapper[4776]: I1011 10:29:24.893390 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tlh8\" (UniqueName: \"kubernetes.io/projected/444ea5b2-c9dc-4685-9f66-2273b30d9045-kube-api-access-7tlh8\") pod \"certified-operators-mwqr6\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") " pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:24.984819 master-2 kubenswrapper[4776]: I1011 10:29:24.984738 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-sgvjd_e3f3ba3c-1d27-4529-9ae3-a61f88e50b62/kube-rbac-proxy/0.log" Oct 11 10:29:25.038743 master-2 kubenswrapper[4776]: I1011 10:29:25.038433 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:25.066107 master-2 kubenswrapper[4776]: I1011 10:29:25.066042 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-w52cn" event={"ID":"35b21a7b-2a5a-4511-a2d5-d950752b4bda","Type":"ContainerStarted","Data":"aa646dbd8ebc87224fe643fd67ac6d06da9999c61bb42a2952179eca90d79b2f"} Oct 11 10:29:25.066107 master-2 kubenswrapper[4776]: I1011 10:29:25.066101 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-w52cn" event={"ID":"35b21a7b-2a5a-4511-a2d5-d950752b4bda","Type":"ContainerStarted","Data":"f85a0c2578f74dc1df3cf78c79f321e17f6d94c3d1623ee8f96a6043898f3a8e"} Oct 11 10:29:25.069057 master-2 kubenswrapper[4776]: I1011 10:29:25.068988 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" event={"ID":"1029b995-20ca-45f4-bccb-e83ccee2075f","Type":"ContainerStarted","Data":"689bf3206b70fe46ed0e99643b190eb90caf56759cb4bc40e6c7a2e98ecadb6a"} Oct 11 10:29:25.069283 master-2 kubenswrapper[4776]: I1011 10:29:25.069077 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" event={"ID":"1029b995-20ca-45f4-bccb-e83ccee2075f","Type":"ContainerStarted","Data":"28961758d7110a9ebe5fde4f8561f86b3b5ece7a41c742fe39cdafa150b14f67"} Oct 11 10:29:25.079776 master-2 kubenswrapper[4776]: I1011 10:29:25.079655 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc" event={"ID":"58aef476-6586-47bb-bf45-dbeccac6271a","Type":"ContainerStarted","Data":"da8404df46b28e243f3a617ea5f5889d8f632ca65d9dfb6ee6a8ca2df35f5786"} Oct 11 10:29:25.085833 master-2 kubenswrapper[4776]: I1011 10:29:25.085755 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-w52cn" podStartSLOduration=130.197507344 podStartE2EDuration="2m12.085737086s" podCreationTimestamp="2025-10-11 10:27:13 +0000 UTC" firstStartedPulling="2025-10-11 10:29:22.591491494 +0000 UTC m=+197.375918203" lastFinishedPulling="2025-10-11 10:29:24.479721246 +0000 UTC m=+199.264147945" observedRunningTime="2025-10-11 10:29:25.082979977 +0000 UTC m=+199.867406686" watchObservedRunningTime="2025-10-11 10:29:25.085737086 +0000 UTC m=+199.870163795" Oct 11 10:29:25.380727 master-2 kubenswrapper[4776]: I1011 10:29:25.380546 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-z9trl_0550ab10-d45d-4526-8551-c1ce0b232bbc/dns-node-resolver/0.log" Oct 11 10:29:25.473731 master-2 kubenswrapper[4776]: I1011 10:29:25.473634 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mwqr6"] Oct 11 10:29:25.477433 master-2 kubenswrapper[4776]: W1011 10:29:25.477392 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod444ea5b2_c9dc_4685_9f66_2273b30d9045.slice/crio-3de15bd009d5969b3f80470fb7549e4f068bb4b317e68ee93b70421988c245b5 WatchSource:0}: Error finding container 3de15bd009d5969b3f80470fb7549e4f068bb4b317e68ee93b70421988c245b5: Status 404 returned error can't find the container with id 3de15bd009d5969b3f80470fb7549e4f068bb4b317e68ee93b70421988c245b5 Oct 11 10:29:25.585586 master-2 kubenswrapper[4776]: I1011 10:29:25.585539 4776 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-6bddf7d79-8wc54_a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1/etcd-operator/0.log" Oct 11 10:29:25.982344 master-2 kubenswrapper[4776]: I1011 10:29:25.982277 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/0.log" Oct 11 10:29:26.088199 master-2 kubenswrapper[4776]: I1011 10:29:26.088122 4776 generic.go:334] "Generic (PLEG): container finished" podID="444ea5b2-c9dc-4685-9f66-2273b30d9045" containerID="2b5bd81b9a71396ec0fb2662415ff9b813a2b521b1d6b546f96222c12fb23c82" exitCode=0 Oct 11 10:29:26.088199 master-2 kubenswrapper[4776]: I1011 10:29:26.088182 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwqr6" event={"ID":"444ea5b2-c9dc-4685-9f66-2273b30d9045","Type":"ContainerDied","Data":"2b5bd81b9a71396ec0fb2662415ff9b813a2b521b1d6b546f96222c12fb23c82"} Oct 11 10:29:26.088199 master-2 kubenswrapper[4776]: I1011 10:29:26.088206 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwqr6" event={"ID":"444ea5b2-c9dc-4685-9f66-2273b30d9045","Type":"ContainerStarted","Data":"3de15bd009d5969b3f80470fb7549e4f068bb4b317e68ee93b70421988c245b5"} Oct 11 10:29:26.091651 master-2 kubenswrapper[4776]: I1011 10:29:26.091596 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" event={"ID":"1029b995-20ca-45f4-bccb-e83ccee2075f","Type":"ContainerStarted","Data":"fd0627f99e898fedf28d7abd206c16618bb59f65133b2c9236f2d54f3cb2f4c6"} Oct 11 10:29:26.134391 master-2 kubenswrapper[4776]: I1011 10:29:26.134301 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t" podStartSLOduration=2.134282595 podStartE2EDuration="2.134282595s" podCreationTimestamp="2025-10-11 10:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:26.131644393 +0000 UTC m=+200.916071122" watchObservedRunningTime="2025-10-11 10:29:26.134282595 +0000 UTC m=+200.918709314" Oct 11 10:29:26.179588 master-2 kubenswrapper[4776]: I1011 10:29:26.179491 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/kube-rbac-proxy/0.log" Oct 11 10:29:26.785207 master-2 kubenswrapper[4776]: I1011 10:29:26.785150 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68f5d95b74-9h5mv_6967590c-695e-4e20-964b-0c643abdf367/kube-apiserver-operator/0.log" Oct 11 10:29:27.184329 master-2 kubenswrapper[4776]: I1011 10:29:27.184272 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-5d85974df9-5gj77_e487f283-7482-463c-90b6-a812e00d0e35/kube-controller-manager-operator/0.log" Oct 11 10:29:27.584060 master-2 kubenswrapper[4776]: I1011 10:29:27.583913 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-766d6b44f6-s5shc_58aef476-6586-47bb-bf45-dbeccac6271a/kube-scheduler-operator-container/1.log" Oct 11 10:29:27.637476 master-2 
kubenswrapper[4776]: I1011 10:29:27.637389 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq"] Oct 11 10:29:27.638160 master-2 kubenswrapper[4776]: I1011 10:29:27.638138 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq" Oct 11 10:29:27.641053 master-2 kubenswrapper[4776]: I1011 10:29:27.641007 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Oct 11 10:29:27.644535 master-2 kubenswrapper[4776]: I1011 10:29:27.644256 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5ddb89f76-57kcw"] Oct 11 10:29:27.645877 master-2 kubenswrapper[4776]: I1011 10:29:27.645244 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.648201 master-2 kubenswrapper[4776]: I1011 10:29:27.647955 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Oct 11 10:29:27.648455 master-2 kubenswrapper[4776]: I1011 10:29:27.648395 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Oct 11 10:29:27.648455 master-2 kubenswrapper[4776]: I1011 10:29:27.648442 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Oct 11 10:29:27.649035 master-2 kubenswrapper[4776]: I1011 10:29:27.648576 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Oct 11 10:29:27.649035 master-2 kubenswrapper[4776]: I1011 10:29:27.648612 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq"] Oct 11 10:29:27.649035 master-2 kubenswrapper[4776]: I1011 10:29:27.649001 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Oct 11 10:29:27.649035 master-2 kubenswrapper[4776]: I1011 10:29:27.649014 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Oct 11 10:29:27.720604 master-2 kubenswrapper[4776]: I1011 10:29:27.720538 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-service-ca-bundle\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.720604 master-2 kubenswrapper[4776]: I1011 10:29:27.720598 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-metrics-certs\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.720881 master-2 kubenswrapper[4776]: I1011 10:29:27.720659 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79cz7\" (UniqueName: \"kubernetes.io/projected/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-kube-api-access-79cz7\") pod \"router-default-5ddb89f76-57kcw\" 
(UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.720881 master-2 kubenswrapper[4776]: I1011 10:29:27.720732 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-default-certificate\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.720881 master-2 kubenswrapper[4776]: I1011 10:29:27.720788 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/2ec2ac05-04f0-4170-9423-b405676995ee-tls-certificates\") pod \"prometheus-operator-admission-webhook-79d5f95f5c-tf6cq\" (UID: \"2ec2ac05-04f0-4170-9423-b405676995ee\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq" Oct 11 10:29:27.720881 master-2 kubenswrapper[4776]: I1011 10:29:27.720839 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-stats-auth\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.821816 master-2 kubenswrapper[4776]: I1011 10:29:27.821403 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/2ec2ac05-04f0-4170-9423-b405676995ee-tls-certificates\") pod \"prometheus-operator-admission-webhook-79d5f95f5c-tf6cq\" (UID: \"2ec2ac05-04f0-4170-9423-b405676995ee\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq" Oct 11 10:29:27.821816 master-2 kubenswrapper[4776]: I1011 10:29:27.821465 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-stats-auth\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.821816 master-2 kubenswrapper[4776]: I1011 10:29:27.821508 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-service-ca-bundle\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.821816 master-2 kubenswrapper[4776]: I1011 10:29:27.821528 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-metrics-certs\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.821816 master-2 kubenswrapper[4776]: I1011 10:29:27.821551 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79cz7\" (UniqueName: \"kubernetes.io/projected/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-kube-api-access-79cz7\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " 
pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.821816 master-2 kubenswrapper[4776]: I1011 10:29:27.821571 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-default-certificate\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.822973 master-2 kubenswrapper[4776]: I1011 10:29:27.822913 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-service-ca-bundle\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.825755 master-2 kubenswrapper[4776]: I1011 10:29:27.825718 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-metrics-certs\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.826107 master-2 kubenswrapper[4776]: I1011 10:29:27.826075 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/2ec2ac05-04f0-4170-9423-b405676995ee-tls-certificates\") pod \"prometheus-operator-admission-webhook-79d5f95f5c-tf6cq\" (UID: \"2ec2ac05-04f0-4170-9423-b405676995ee\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq" Oct 11 10:29:27.826760 master-2 kubenswrapper[4776]: I1011 10:29:27.826722 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-default-certificate\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.833054 master-2 kubenswrapper[4776]: I1011 10:29:27.833012 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-stats-auth\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.845320 master-2 kubenswrapper[4776]: I1011 10:29:27.845256 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79cz7\" (UniqueName: \"kubernetes.io/projected/c8cd90ff-e70c-4837-82c4-0fec67a8a51b-kube-api-access-79cz7\") pod \"router-default-5ddb89f76-57kcw\" (UID: \"c8cd90ff-e70c-4837-82c4-0fec67a8a51b\") " pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:27.957078 master-2 kubenswrapper[4776]: I1011 10:29:27.957016 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq" Oct 11 10:29:27.966211 master-2 kubenswrapper[4776]: I1011 10:29:27.966173 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:28.006936 master-2 kubenswrapper[4776]: W1011 10:29:28.006900 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8cd90ff_e70c_4837_82c4_0fec67a8a51b.slice/crio-7c323050fd888edd84eebd02fed987cdc12b328bcca0b9ae079b6c745bcff1a9 WatchSource:0}: Error finding container 7c323050fd888edd84eebd02fed987cdc12b328bcca0b9ae079b6c745bcff1a9: Status 404 returned error can't find the container with id 7c323050fd888edd84eebd02fed987cdc12b328bcca0b9ae079b6c745bcff1a9 Oct 11 10:29:28.103223 master-2 kubenswrapper[4776]: I1011 10:29:28.103102 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-57kcw" event={"ID":"c8cd90ff-e70c-4837-82c4-0fec67a8a51b","Type":"ContainerStarted","Data":"7c323050fd888edd84eebd02fed987cdc12b328bcca0b9ae079b6c745bcff1a9"} Oct 11 10:29:28.386769 master-2 kubenswrapper[4776]: I1011 10:29:28.381055 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq"] Oct 11 10:29:28.387442 master-2 kubenswrapper[4776]: W1011 10:29:28.387187 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ec2ac05_04f0_4170_9423_b405676995ee.slice/crio-7984ff7be16de27e1c0ded96341501037f0da16732811ea61514bf163a918358 WatchSource:0}: Error finding container 7984ff7be16de27e1c0ded96341501037f0da16732811ea61514bf163a918358: Status 404 returned error can't find the container with id 7984ff7be16de27e1c0ded96341501037f0da16732811ea61514bf163a918358 Oct 11 10:29:28.983060 master-2 kubenswrapper[4776]: I1011 10:29:28.983018 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-dczh4_e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc/cluster-olm-operator/0.log" Oct 11 10:29:29.109400 master-2 kubenswrapper[4776]: I1011 10:29:29.109339 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq" event={"ID":"2ec2ac05-04f0-4170-9423-b405676995ee","Type":"ContainerStarted","Data":"7984ff7be16de27e1c0ded96341501037f0da16732811ea61514bf163a918358"} Oct 11 10:29:29.178204 master-2 kubenswrapper[4776]: I1011 10:29:29.178150 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-dczh4_e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc/copy-catalogd-manifests/0.log" Oct 11 10:29:29.379770 master-2 kubenswrapper[4776]: I1011 10:29:29.379656 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-dczh4_e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc/copy-operator-controller-manifests/0.log" Oct 11 10:29:29.519313 master-2 kubenswrapper[4776]: I1011 10:29:29.518362 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-tpjwk"] Oct 11 10:29:29.519863 master-2 kubenswrapper[4776]: I1011 10:29:29.519427 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tpjwk" Oct 11 10:29:29.522682 master-2 kubenswrapper[4776]: I1011 10:29:29.522637 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Oct 11 10:29:29.522915 master-2 kubenswrapper[4776]: I1011 10:29:29.522853 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Oct 11 10:29:29.584930 master-2 kubenswrapper[4776]: I1011 10:29:29.584894 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-dczh4_e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc/cluster-olm-operator/1.log" Oct 11 10:29:29.645193 master-2 kubenswrapper[4776]: I1011 10:29:29.645070 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3594e65d-a9cb-4d12-b4cd-88229b18abdc-node-bootstrap-token\") pod \"machine-config-server-tpjwk\" (UID: \"3594e65d-a9cb-4d12-b4cd-88229b18abdc\") " pod="openshift-machine-config-operator/machine-config-server-tpjwk" Oct 11 10:29:29.645193 master-2 kubenswrapper[4776]: I1011 10:29:29.645133 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95b9q\" (UniqueName: \"kubernetes.io/projected/3594e65d-a9cb-4d12-b4cd-88229b18abdc-kube-api-access-95b9q\") pod \"machine-config-server-tpjwk\" (UID: \"3594e65d-a9cb-4d12-b4cd-88229b18abdc\") " pod="openshift-machine-config-operator/machine-config-server-tpjwk" Oct 11 10:29:29.645193 master-2 kubenswrapper[4776]: I1011 10:29:29.645157 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3594e65d-a9cb-4d12-b4cd-88229b18abdc-certs\") pod \"machine-config-server-tpjwk\" (UID: \"3594e65d-a9cb-4d12-b4cd-88229b18abdc\") " pod="openshift-machine-config-operator/machine-config-server-tpjwk" Oct 11 10:29:29.745922 master-2 kubenswrapper[4776]: I1011 10:29:29.745870 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95b9q\" (UniqueName: \"kubernetes.io/projected/3594e65d-a9cb-4d12-b4cd-88229b18abdc-kube-api-access-95b9q\") pod \"machine-config-server-tpjwk\" (UID: \"3594e65d-a9cb-4d12-b4cd-88229b18abdc\") " pod="openshift-machine-config-operator/machine-config-server-tpjwk" Oct 11 10:29:29.745922 master-2 kubenswrapper[4776]: I1011 10:29:29.745922 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3594e65d-a9cb-4d12-b4cd-88229b18abdc-certs\") pod \"machine-config-server-tpjwk\" (UID: \"3594e65d-a9cb-4d12-b4cd-88229b18abdc\") " pod="openshift-machine-config-operator/machine-config-server-tpjwk" Oct 11 10:29:29.746185 master-2 kubenswrapper[4776]: I1011 10:29:29.745993 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3594e65d-a9cb-4d12-b4cd-88229b18abdc-node-bootstrap-token\") pod \"machine-config-server-tpjwk\" (UID: \"3594e65d-a9cb-4d12-b4cd-88229b18abdc\") " pod="openshift-machine-config-operator/machine-config-server-tpjwk" Oct 11 10:29:29.749790 master-2 kubenswrapper[4776]: I1011 10:29:29.749766 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" 
(UniqueName: \"kubernetes.io/secret/3594e65d-a9cb-4d12-b4cd-88229b18abdc-node-bootstrap-token\") pod \"machine-config-server-tpjwk\" (UID: \"3594e65d-a9cb-4d12-b4cd-88229b18abdc\") " pod="openshift-machine-config-operator/machine-config-server-tpjwk" Oct 11 10:29:29.749872 master-2 kubenswrapper[4776]: I1011 10:29:29.749806 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3594e65d-a9cb-4d12-b4cd-88229b18abdc-certs\") pod \"machine-config-server-tpjwk\" (UID: \"3594e65d-a9cb-4d12-b4cd-88229b18abdc\") " pod="openshift-machine-config-operator/machine-config-server-tpjwk" Oct 11 10:29:29.767238 master-2 kubenswrapper[4776]: I1011 10:29:29.767185 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95b9q\" (UniqueName: \"kubernetes.io/projected/3594e65d-a9cb-4d12-b4cd-88229b18abdc-kube-api-access-95b9q\") pod \"machine-config-server-tpjwk\" (UID: \"3594e65d-a9cb-4d12-b4cd-88229b18abdc\") " pod="openshift-machine-config-operator/machine-config-server-tpjwk" Oct 11 10:29:29.787195 master-2 kubenswrapper[4776]: I1011 10:29:29.787152 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7d88655794-7jd4q_f8050d30-444b-40a5-829c-1e3b788910a0/openshift-apiserver-operator/0.log" Oct 11 10:29:29.834275 master-2 kubenswrapper[4776]: I1011 10:29:29.834210 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tpjwk" Oct 11 10:29:29.877883 master-2 kubenswrapper[4776]: W1011 10:29:29.877839 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3594e65d_a9cb_4d12_b4cd_88229b18abdc.slice/crio-a4faf49c97588c7f69baf4c5f6ff4f277750df6632d3e7e041026675c3ce1864 WatchSource:0}: Error finding container a4faf49c97588c7f69baf4c5f6ff4f277750df6632d3e7e041026675c3ce1864: Status 404 returned error can't find the container with id a4faf49c97588c7f69baf4c5f6ff4f277750df6632d3e7e041026675c3ce1864 Oct 11 10:29:30.117158 master-2 kubenswrapper[4776]: I1011 10:29:30.117108 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-57kcw" event={"ID":"c8cd90ff-e70c-4837-82c4-0fec67a8a51b","Type":"ContainerStarted","Data":"d0cb5ca87b019c6dd7de016a32a463f61a07f9fbd819c59a6476d827605ded9c"} Oct 11 10:29:30.120028 master-2 kubenswrapper[4776]: I1011 10:29:30.119179 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tpjwk" event={"ID":"3594e65d-a9cb-4d12-b4cd-88229b18abdc","Type":"ContainerStarted","Data":"1df183bde159ebd53360d83dbc6dec8b3f26092aec2e036570774307ae38d932"} Oct 11 10:29:30.120028 master-2 kubenswrapper[4776]: I1011 10:29:30.119235 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tpjwk" event={"ID":"3594e65d-a9cb-4d12-b4cd-88229b18abdc","Type":"ContainerStarted","Data":"a4faf49c97588c7f69baf4c5f6ff4f277750df6632d3e7e041026675c3ce1864"} Oct 11 10:29:30.123761 master-2 kubenswrapper[4776]: I1011 10:29:30.120865 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq" event={"ID":"2ec2ac05-04f0-4170-9423-b405676995ee","Type":"ContainerStarted","Data":"f930b9c6be9d1302318e869655554571eb051f5a4e26ab4627da1ce4a1e858d8"} Oct 11 10:29:30.123761 
master-2 kubenswrapper[4776]: I1011 10:29:30.121164 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq" Oct 11 10:29:30.126360 master-2 kubenswrapper[4776]: I1011 10:29:30.126224 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq" Oct 11 10:29:30.138720 master-2 kubenswrapper[4776]: I1011 10:29:30.138609 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podStartSLOduration=41.278663674 podStartE2EDuration="43.138590567s" podCreationTimestamp="2025-10-11 10:28:47 +0000 UTC" firstStartedPulling="2025-10-11 10:29:28.008420155 +0000 UTC m=+202.792846864" lastFinishedPulling="2025-10-11 10:29:29.868347048 +0000 UTC m=+204.652773757" observedRunningTime="2025-10-11 10:29:30.137854387 +0000 UTC m=+204.922281136" watchObservedRunningTime="2025-10-11 10:29:30.138590567 +0000 UTC m=+204.923017276" Oct 11 10:29:30.160032 master-2 kubenswrapper[4776]: I1011 10:29:30.159397 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq" podStartSLOduration=7.677467215 podStartE2EDuration="9.159365621s" podCreationTimestamp="2025-10-11 10:29:21 +0000 UTC" firstStartedPulling="2025-10-11 10:29:28.390374108 +0000 UTC m=+203.174800827" lastFinishedPulling="2025-10-11 10:29:29.872272524 +0000 UTC m=+204.656699233" observedRunningTime="2025-10-11 10:29:30.159178055 +0000 UTC m=+204.943604814" watchObservedRunningTime="2025-10-11 10:29:30.159365621 +0000 UTC m=+204.943792380" Oct 11 10:29:30.195663 master-2 kubenswrapper[4776]: I1011 10:29:30.195593 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-tpjwk" podStartSLOduration=1.195574384 podStartE2EDuration="1.195574384s" podCreationTimestamp="2025-10-11 10:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:30.194248368 +0000 UTC m=+204.978675077" watchObservedRunningTime="2025-10-11 10:29:30.195574384 +0000 UTC m=+204.980001093" Oct 11 10:29:30.578440 master-2 kubenswrapper[4776]: I1011 10:29:30.578332 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-555f658fd6-wmcqt_7a89cb41-fb97-4282-a12d-c6f8d87bc41e/fix-audit-permissions/0.log" Oct 11 10:29:30.781849 master-2 kubenswrapper[4776]: I1011 10:29:30.781780 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-555f658fd6-wmcqt_7a89cb41-fb97-4282-a12d-c6f8d87bc41e/openshift-apiserver/0.log" Oct 11 10:29:30.967833 master-2 kubenswrapper[4776]: I1011 10:29:30.967788 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:30.970371 master-2 kubenswrapper[4776]: I1011 10:29:30.970336 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:30.970371 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:30.970371 master-2 kubenswrapper[4776]: [+]process-running ok 
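The pod_startup_latency_tracker entries above report two figures for router-default-5ddb89f76-57kcw: podStartE2EDuration (43.138590567s, i.e. watchObservedRunningTime minus podCreationTimestamp) and podStartSLOduration (41.278663674). The numbers are consistent with the SLO figure being the end-to-end startup time minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The sketch below is not kubelet source code, only the arithmetic those fields imply, using the router pod's timestamps from the log:

```go
// Reproduce the relationship between podStartE2EDuration and podStartSLOduration
// reported above for router-default-5ddb89f76-57kcw. Assumption (hedged): the SLO
// duration excludes the image-pull window, which matches these numbers exactly.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2025-10-11 10:28:47 +0000 UTC")                      // podCreationTimestamp
	firstStartedPulling := parse("2025-10-11 10:29:28.008420155 +0000 UTC") // firstStartedPulling
	lastFinishedPulling := parse("2025-10-11 10:29:29.868347048 +0000 UTC") // lastFinishedPulling
	watchObservedRunning := parse("2025-10-11 10:29:30.138590567 +0000 UTC") // watchObservedRunningTime

	e2e := watchObservedRunning.Sub(created)              // podStartE2EDuration
	pull := lastFinishedPulling.Sub(firstStartedPulling)  // time spent pulling images
	slo := e2e - pull                                     // podStartSLOduration

	fmt.Println("podStartE2EDuration:", e2e) // 43.138590567s
	fmt.Println("podStartSLOduration:", slo) // 41.278663674s
}
```

The same reading fits machine-config-server-tpjwk above: it pulled nothing (zero-value pull timestamps), so its podStartSLOduration and podStartE2EDuration are both 1.195574384s.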
Oct 11 10:29:30.970371 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:30.970510 master-2 kubenswrapper[4776]: I1011 10:29:30.970383 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:30.981634 master-2 kubenswrapper[4776]: I1011 10:29:30.980715 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-555f658fd6-wmcqt_7a89cb41-fb97-4282-a12d-c6f8d87bc41e/openshift-apiserver-check-endpoints/0.log" Oct 11 10:29:31.180512 master-2 kubenswrapper[4776]: I1011 10:29:31.180467 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-6bddf7d79-8wc54_a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1/etcd-operator/0.log" Oct 11 10:29:31.385614 master-2 kubenswrapper[4776]: I1011 10:29:31.385496 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5745565d84-bq4rs_88129ec6-6f99-42a1-842a-6a965c6b58fe/openshift-controller-manager-operator/0.log" Oct 11 10:29:31.968978 master-2 kubenswrapper[4776]: I1011 10:29:31.968912 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:31.968978 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:31.968978 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:31.968978 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:31.968978 master-2 kubenswrapper[4776]: I1011 10:29:31.968960 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:32.236360 master-2 kubenswrapper[4776]: I1011 10:29:32.236260 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc"] Oct 11 10:29:32.237000 master-2 kubenswrapper[4776]: I1011 10:29:32.236982 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.239407 master-2 kubenswrapper[4776]: I1011 10:29:32.239270 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Oct 11 10:29:32.241098 master-2 kubenswrapper[4776]: I1011 10:29:32.241056 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Oct 11 10:29:32.241098 master-2 kubenswrapper[4776]: I1011 10:29:32.241089 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Oct 11 10:29:32.241224 master-2 kubenswrapper[4776]: I1011 10:29:32.241074 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc"] Oct 11 10:29:32.271237 master-2 kubenswrapper[4776]: I1011 10:29:32.271202 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-prometheus-operator-tls\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.271410 master-2 kubenswrapper[4776]: I1011 10:29:32.271267 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.271410 master-2 kubenswrapper[4776]: I1011 10:29:32.271315 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-metrics-client-ca\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.271410 master-2 kubenswrapper[4776]: I1011 10:29:32.271362 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndq25\" (UniqueName: \"kubernetes.io/projected/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-kube-api-access-ndq25\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.372388 master-2 kubenswrapper[4776]: I1011 10:29:32.372310 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndq25\" (UniqueName: \"kubernetes.io/projected/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-kube-api-access-ndq25\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.372610 master-2 kubenswrapper[4776]: I1011 10:29:32.372412 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-prometheus-operator-tls\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: 
\"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.372610 master-2 kubenswrapper[4776]: I1011 10:29:32.372470 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.373135 master-2 kubenswrapper[4776]: I1011 10:29:32.373107 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-metrics-client-ca\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.373195 master-2 kubenswrapper[4776]: E1011 10:29:32.372627 4776 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Oct 11 10:29:32.373280 master-2 kubenswrapper[4776]: E1011 10:29:32.373243 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-prometheus-operator-tls podName:d59f55bb-61cf-47d6-b57b-6b02c1cf3b60 nodeName:}" failed. No retries permitted until 2025-10-11 10:29:32.873221556 +0000 UTC m=+207.657648265 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-prometheus-operator-tls") pod "prometheus-operator-574d7f8db8-cwbcc" (UID: "d59f55bb-61cf-47d6-b57b-6b02c1cf3b60") : secret "prometheus-operator-tls" not found Oct 11 10:29:32.373922 master-2 kubenswrapper[4776]: I1011 10:29:32.373901 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-metrics-client-ca\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.456122 master-2 kubenswrapper[4776]: I1011 10:29:32.456043 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndq25\" (UniqueName: \"kubernetes.io/projected/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-kube-api-access-ndq25\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.456400 master-2 kubenswrapper[4776]: I1011 10:29:32.456351 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.878415 master-2 kubenswrapper[4776]: I1011 10:29:32.878300 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-prometheus-operator-tls\") pod 
\"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.894167 master-2 kubenswrapper[4776]: I1011 10:29:32.894112 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/d59f55bb-61cf-47d6-b57b-6b02c1cf3b60-prometheus-operator-tls\") pod \"prometheus-operator-574d7f8db8-cwbcc\" (UID: \"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60\") " pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:32.971705 master-2 kubenswrapper[4776]: I1011 10:29:32.971091 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:32.971705 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:32.971705 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:32.971705 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:32.972225 master-2 kubenswrapper[4776]: I1011 10:29:32.971724 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:33.135016 master-2 kubenswrapper[4776]: I1011 10:29:33.134968 4776 generic.go:334] "Generic (PLEG): container finished" podID="89e02bcb-b3fe-4a45-a531-4ab41d8ee424" containerID="2311ef45db3839160f50cf52dfc54b1dab3ed31b8a810ff4165ecab2eb84274b" exitCode=0 Oct 11 10:29:33.135095 master-2 kubenswrapper[4776]: I1011 10:29:33.135051 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" event={"ID":"89e02bcb-b3fe-4a45-a531-4ab41d8ee424","Type":"ContainerDied","Data":"2311ef45db3839160f50cf52dfc54b1dab3ed31b8a810ff4165ecab2eb84274b"} Oct 11 10:29:33.135641 master-2 kubenswrapper[4776]: I1011 10:29:33.135614 4776 scope.go:117] "RemoveContainer" containerID="2311ef45db3839160f50cf52dfc54b1dab3ed31b8a810ff4165ecab2eb84274b" Oct 11 10:29:33.138507 master-2 kubenswrapper[4776]: I1011 10:29:33.138467 4776 generic.go:334] "Generic (PLEG): container finished" podID="05cf2994-c049-4f42-b2d8-83b23e7e763a" containerID="09c81be857efc36ce4d8daa4c934f1437649343613c3da9aac63b8db86978ed6" exitCode=0 Oct 11 10:29:33.138507 master-2 kubenswrapper[4776]: I1011 10:29:33.138504 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" event={"ID":"05cf2994-c049-4f42-b2d8-83b23e7e763a","Type":"ContainerDied","Data":"09c81be857efc36ce4d8daa4c934f1437649343613c3da9aac63b8db86978ed6"} Oct 11 10:29:33.138886 master-2 kubenswrapper[4776]: I1011 10:29:33.138862 4776 scope.go:117] "RemoveContainer" containerID="09c81be857efc36ce4d8daa4c934f1437649343613c3da9aac63b8db86978ed6" Oct 11 10:29:33.155334 master-2 kubenswrapper[4776]: I1011 10:29:33.155273 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" Oct 11 10:29:33.970043 master-2 kubenswrapper[4776]: I1011 10:29:33.969975 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:33.970043 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:33.970043 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:33.970043 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:33.970379 master-2 kubenswrapper[4776]: I1011 10:29:33.970060 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:34.143411 master-2 kubenswrapper[4776]: I1011 10:29:34.143323 4776 generic.go:334] "Generic (PLEG): container finished" podID="6967590c-695e-4e20-964b-0c643abdf367" containerID="e3b061ce9d0eb2a283f25a5377c1ec78f61f62a5f692f1f7bc57aa0c47f8c828" exitCode=0 Oct 11 10:29:34.143411 master-2 kubenswrapper[4776]: I1011 10:29:34.143355 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" event={"ID":"6967590c-695e-4e20-964b-0c643abdf367","Type":"ContainerDied","Data":"e3b061ce9d0eb2a283f25a5377c1ec78f61f62a5f692f1f7bc57aa0c47f8c828"} Oct 11 10:29:34.144213 master-2 kubenswrapper[4776]: I1011 10:29:34.143731 4776 scope.go:117] "RemoveContainer" containerID="e3b061ce9d0eb2a283f25a5377c1ec78f61f62a5f692f1f7bc57aa0c47f8c828" Oct 11 10:29:34.144927 master-2 kubenswrapper[4776]: I1011 10:29:34.144894 4776 generic.go:334] "Generic (PLEG): container finished" podID="e487f283-7482-463c-90b6-a812e00d0e35" containerID="75e09f57e9f3d2d1f9408b0cb83b216f2432311fdbe734afce1ac9bd82b32464" exitCode=0 Oct 11 10:29:34.144927 master-2 kubenswrapper[4776]: I1011 10:29:34.144921 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" event={"ID":"e487f283-7482-463c-90b6-a812e00d0e35","Type":"ContainerDied","Data":"75e09f57e9f3d2d1f9408b0cb83b216f2432311fdbe734afce1ac9bd82b32464"} Oct 11 10:29:34.145189 master-2 kubenswrapper[4776]: I1011 10:29:34.145134 4776 scope.go:117] "RemoveContainer" containerID="75e09f57e9f3d2d1f9408b0cb83b216f2432311fdbe734afce1ac9bd82b32464" Oct 11 10:29:34.952868 master-2 kubenswrapper[4776]: I1011 10:29:34.952664 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc"] Oct 11 10:29:34.969509 master-2 kubenswrapper[4776]: I1011 10:29:34.969476 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:34.969509 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:34.969509 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:34.969509 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:34.969803 master-2 kubenswrapper[4776]: I1011 10:29:34.969527 4776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:35.160263 master-2 kubenswrapper[4776]: I1011 10:29:35.160214 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz" event={"ID":"89e02bcb-b3fe-4a45-a531-4ab41d8ee424","Type":"ContainerStarted","Data":"04593c9a8e4176a6b2a0cc6b66798cfe315899c4cef7952e21b368edd8e44a59"} Oct 11 10:29:35.162011 master-2 kubenswrapper[4776]: I1011 10:29:35.161975 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-568c655666-84cp8" event={"ID":"05cf2994-c049-4f42-b2d8-83b23e7e763a","Type":"ContainerStarted","Data":"d42f6d5a7ad6d1dda7a68f7de1dc7e076bddcd07acb9d4de53e05e48fc3a150f"} Oct 11 10:29:35.163896 master-2 kubenswrapper[4776]: I1011 10:29:35.163843 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" event={"ID":"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60","Type":"ContainerStarted","Data":"37956e3fb55cda8feb4fe7a4112049c9a8dffa4e87d666c5864d55b4c361351f"} Oct 11 10:29:35.169711 master-2 kubenswrapper[4776]: I1011 10:29:35.169659 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77" event={"ID":"e487f283-7482-463c-90b6-a812e00d0e35","Type":"ContainerStarted","Data":"66ac492f9bc499fbe2cdd855b8a45b0ac86e5a06c95abb9a4fee261a78d012fb"} Oct 11 10:29:35.171844 master-2 kubenswrapper[4776]: I1011 10:29:35.171822 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwqr6" event={"ID":"444ea5b2-c9dc-4685-9f66-2273b30d9045","Type":"ContainerStarted","Data":"2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab"} Oct 11 10:29:35.178893 master-2 kubenswrapper[4776]: I1011 10:29:35.178401 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv" event={"ID":"6967590c-695e-4e20-964b-0c643abdf367","Type":"ContainerStarted","Data":"2f0c2ef6b9765f3091f3311664d99563342d78a3080dd836513b6906718f0fd5"} Oct 11 10:29:35.623426 master-2 kubenswrapper[4776]: I1011 10:29:35.623304 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:29:35.623426 master-2 kubenswrapper[4776]: E1011 10:29:35.623404 4776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:29:35.623653 master-2 kubenswrapper[4776]: E1011 10:29:35.623449 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:30:39.623436497 +0000 UTC m=+274.407863206 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : configmap "client-ca" not found Oct 11 10:29:35.970293 master-2 kubenswrapper[4776]: I1011 10:29:35.970235 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:35.970293 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:35.970293 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:35.970293 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:35.970554 master-2 kubenswrapper[4776]: I1011 10:29:35.970317 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:36.185724 master-2 kubenswrapper[4776]: I1011 10:29:36.184783 4776 generic.go:334] "Generic (PLEG): container finished" podID="9d362fb9-48e4-4d72-a940-ec6c9c051fac" containerID="53f868ac8b0d0bcce62eb761aa1d944f2aeff4c3ea9d582cec7865a78d5991fa" exitCode=0 Oct 11 10:29:36.185724 master-2 kubenswrapper[4776]: I1011 10:29:36.184836 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" event={"ID":"9d362fb9-48e4-4d72-a940-ec6c9c051fac","Type":"ContainerDied","Data":"53f868ac8b0d0bcce62eb761aa1d944f2aeff4c3ea9d582cec7865a78d5991fa"} Oct 11 10:29:36.185724 master-2 kubenswrapper[4776]: I1011 10:29:36.185223 4776 scope.go:117] "RemoveContainer" containerID="53f868ac8b0d0bcce62eb761aa1d944f2aeff4c3ea9d582cec7865a78d5991fa" Oct 11 10:29:36.190226 master-2 kubenswrapper[4776]: I1011 10:29:36.188997 4776 generic.go:334] "Generic (PLEG): container finished" podID="444ea5b2-c9dc-4685-9f66-2273b30d9045" containerID="2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab" exitCode=0 Oct 11 10:29:36.190226 master-2 kubenswrapper[4776]: I1011 10:29:36.189022 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwqr6" event={"ID":"444ea5b2-c9dc-4685-9f66-2273b30d9045","Type":"ContainerDied","Data":"2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab"} Oct 11 10:29:36.970658 master-2 kubenswrapper[4776]: I1011 10:29:36.970575 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:36.970658 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:36.970658 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:36.970658 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:36.971097 master-2 kubenswrapper[4776]: I1011 10:29:36.970711 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:37.199972 master-2 kubenswrapper[4776]: I1011 10:29:37.199877 4776 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwqr6" event={"ID":"444ea5b2-c9dc-4685-9f66-2273b30d9045","Type":"ContainerStarted","Data":"79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a"} Oct 11 10:29:37.209779 master-2 kubenswrapper[4776]: I1011 10:29:37.209657 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" event={"ID":"9d362fb9-48e4-4d72-a940-ec6c9c051fac","Type":"ContainerStarted","Data":"1b4a8ced5ca6a681439ab9f258f847eef6764901729df9db483938353402fc3b"} Oct 11 10:29:37.210361 master-2 kubenswrapper[4776]: I1011 10:29:37.210317 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:29:37.213285 master-2 kubenswrapper[4776]: I1011 10:29:37.213238 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" event={"ID":"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60","Type":"ContainerStarted","Data":"659605d0ecdd890d98bc3c30bed35c1685bb6e09336e89d7e95ffc3574457f60"} Oct 11 10:29:37.213285 master-2 kubenswrapper[4776]: I1011 10:29:37.213283 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" event={"ID":"d59f55bb-61cf-47d6-b57b-6b02c1cf3b60","Type":"ContainerStarted","Data":"c1cea60d20b8b7bbe3f7f6818ae17857d3e5b363e53892761bddf5afe67f98a5"} Oct 11 10:29:37.224766 master-2 kubenswrapper[4776]: I1011 10:29:37.223016 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mwqr6" podStartSLOduration=2.680106029 podStartE2EDuration="13.223003089s" podCreationTimestamp="2025-10-11 10:29:24 +0000 UTC" firstStartedPulling="2025-10-11 10:29:26.089501059 +0000 UTC m=+200.873927798" lastFinishedPulling="2025-10-11 10:29:36.632398149 +0000 UTC m=+211.416824858" observedRunningTime="2025-10-11 10:29:37.222818314 +0000 UTC m=+212.007245063" watchObservedRunningTime="2025-10-11 10:29:37.223003089 +0000 UTC m=+212.007429798" Oct 11 10:29:37.246953 master-2 kubenswrapper[4776]: I1011 10:29:37.246845 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc" podStartSLOduration=3.567997012 podStartE2EDuration="5.246817307s" podCreationTimestamp="2025-10-11 10:29:32 +0000 UTC" firstStartedPulling="2025-10-11 10:29:34.960468342 +0000 UTC m=+209.744895051" lastFinishedPulling="2025-10-11 10:29:36.639288637 +0000 UTC m=+211.423715346" observedRunningTime="2025-10-11 10:29:37.243757973 +0000 UTC m=+212.028184682" watchObservedRunningTime="2025-10-11 10:29:37.246817307 +0000 UTC m=+212.031244056" Oct 11 10:29:37.967242 master-2 kubenswrapper[4776]: I1011 10:29:37.967144 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:29:37.969418 master-2 kubenswrapper[4776]: I1011 10:29:37.969371 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:37.969418 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:37.969418 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:37.969418 master-2 
kubenswrapper[4776]: healthz check failed Oct 11 10:29:37.969720 master-2 kubenswrapper[4776]: I1011 10:29:37.969468 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:38.219980 master-2 kubenswrapper[4776]: I1011 10:29:38.219797 4776 generic.go:334] "Generic (PLEG): container finished" podID="7004f3ff-6db8-446d-94c1-1223e975299d" containerID="13931b8d42a71308bf45f4bd6921b1ab789c1a4f3b0b726209cf504aecb722a9" exitCode=0 Oct 11 10:29:38.220803 master-2 kubenswrapper[4776]: I1011 10:29:38.219981 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" event={"ID":"7004f3ff-6db8-446d-94c1-1223e975299d","Type":"ContainerDied","Data":"13931b8d42a71308bf45f4bd6921b1ab789c1a4f3b0b726209cf504aecb722a9"} Oct 11 10:29:38.220986 master-2 kubenswrapper[4776]: I1011 10:29:38.220917 4776 scope.go:117] "RemoveContainer" containerID="13931b8d42a71308bf45f4bd6921b1ab789c1a4f3b0b726209cf504aecb722a9" Oct 11 10:29:38.563522 master-2 kubenswrapper[4776]: I1011 10:29:38.563365 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:29:38.563522 master-2 kubenswrapper[4776]: E1011 10:29:38.563512 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:29:38.563908 master-2 kubenswrapper[4776]: E1011 10:29:38.563578 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:30:42.563560918 +0000 UTC m=+277.347987627 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : configmap "client-ca" not found Oct 11 10:29:38.970338 master-2 kubenswrapper[4776]: I1011 10:29:38.969702 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:38.970338 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:38.970338 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:38.970338 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:38.970338 master-2 kubenswrapper[4776]: I1011 10:29:38.969791 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:39.227958 master-2 kubenswrapper[4776]: I1011 10:29:39.227790 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc" event={"ID":"7004f3ff-6db8-446d-94c1-1223e975299d","Type":"ContainerStarted","Data":"9d576122c332ab9836c7092653bba3fb6cc3dc9cf6006fb55fc126faded0454e"} Oct 11 10:29:39.230932 master-2 kubenswrapper[4776]: I1011 10:29:39.230842 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5745565d84-bq4rs_88129ec6-6f99-42a1-842a-6a965c6b58fe/openshift-controller-manager-operator/0.log" Oct 11 10:29:39.230932 master-2 kubenswrapper[4776]: I1011 10:29:39.230899 4776 generic.go:334] "Generic (PLEG): container finished" podID="88129ec6-6f99-42a1-842a-6a965c6b58fe" containerID="e266c7a2e3d240b36e8aa83f32c98d86c0362e7f150797bd2e151f66b7e2430e" exitCode=1 Oct 11 10:29:39.230932 master-2 kubenswrapper[4776]: I1011 10:29:39.230932 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" event={"ID":"88129ec6-6f99-42a1-842a-6a965c6b58fe","Type":"ContainerDied","Data":"e266c7a2e3d240b36e8aa83f32c98d86c0362e7f150797bd2e151f66b7e2430e"} Oct 11 10:29:39.231539 master-2 kubenswrapper[4776]: I1011 10:29:39.231340 4776 scope.go:117] "RemoveContainer" containerID="e266c7a2e3d240b36e8aa83f32c98d86c0362e7f150797bd2e151f66b7e2430e" Oct 11 10:29:39.969990 master-2 kubenswrapper[4776]: I1011 10:29:39.969903 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:39.969990 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:39.969990 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:39.969990 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:39.971011 master-2 kubenswrapper[4776]: I1011 10:29:39.970093 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
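The router startup probe keeps failing on a 500 from its health endpoint, roughly once per second, while the backend-http and has-synced checks in the response body still report failure. As a rough illustration of what such an HTTP probe does (this is not the kubelet prober code, and the URL below is a placeholder rather than the router's actual probe endpoint), a check like the following treats any status outside 2xx/3xx as failure and keeps the start of the body for the log message:

```go
// Minimal sketch of an HTTP probe-style check, mirroring the repeated
// "Probe failed ... statuscode: 500" entries above. Illustrative only; the
// endpoint is a placeholder assumption, not taken from the log.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func probeOnce(url string) (ok bool, status int, bodyStart string, err error) {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return false, 0, "", err
	}
	defer resp.Body.Close()

	// Keep only the beginning of the body, like the "start-of-body=" field above.
	buf, _ := io.ReadAll(io.LimitReader(resp.Body, 256))

	// HTTP probes of this kind count 2xx and 3xx as success, everything else as failure.
	ok = resp.StatusCode >= 200 && resp.StatusCode < 400
	return ok, resp.StatusCode, string(buf), nil
}

func main() {
	ok, status, body, err := probeOnce("http://127.0.0.1:1936/healthz") // placeholder endpoint
	if err != nil {
		fmt.Println("probe error:", err)
		return
	}
	if !ok {
		fmt.Printf("HTTP probe failed with statuscode: %d\nstart-of-body: %s\n", status, body)
		return
	}
	fmt.Println("probe succeeded")
}
```

Because this is a startup probe, the kubelet simply keeps retrying on the probe period (about every second here) and only hands over to the readiness and liveness probes once it succeeds.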
Oct 11 10:29:40.240609 master-2 kubenswrapper[4776]: I1011 10:29:40.240473 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5745565d84-bq4rs_88129ec6-6f99-42a1-842a-6a965c6b58fe/openshift-controller-manager-operator/0.log" Oct 11 10:29:40.240609 master-2 kubenswrapper[4776]: I1011 10:29:40.240538 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs" event={"ID":"88129ec6-6f99-42a1-842a-6a965c6b58fe","Type":"ContainerStarted","Data":"b6a7c367c0b1c516d9da1e5da56e626bdc4db09395e1cc1c9318830bf75af8ca"} Oct 11 10:29:40.968915 master-2 kubenswrapper[4776]: I1011 10:29:40.968822 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:40.968915 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:40.968915 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:40.968915 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:40.969263 master-2 kubenswrapper[4776]: I1011 10:29:40.968935 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:41.340166 master-2 kubenswrapper[4776]: I1011 10:29:41.340012 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7" Oct 11 10:29:41.969924 master-2 kubenswrapper[4776]: I1011 10:29:41.969844 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:41.969924 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:41.969924 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:41.969924 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:41.970185 master-2 kubenswrapper[4776]: I1011 10:29:41.969945 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:42.970248 master-2 kubenswrapper[4776]: I1011 10:29:42.970139 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:42.970248 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:42.970248 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:42.970248 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:42.971423 master-2 kubenswrapper[4776]: I1011 10:29:42.970251 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Oct 11 10:29:43.260457 master-2 kubenswrapper[4776]: I1011 10:29:43.260343 4776 generic.go:334] "Generic (PLEG): container finished" podID="f8050d30-444b-40a5-829c-1e3b788910a0" containerID="5001bd6df546d1ceae6c934b8abd9de1f6f93838b1e654bff89ff6b24eb56ca9" exitCode=0 Oct 11 10:29:43.260775 master-2 kubenswrapper[4776]: I1011 10:29:43.260472 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" event={"ID":"f8050d30-444b-40a5-829c-1e3b788910a0","Type":"ContainerDied","Data":"5001bd6df546d1ceae6c934b8abd9de1f6f93838b1e654bff89ff6b24eb56ca9"} Oct 11 10:29:43.262519 master-2 kubenswrapper[4776]: I1011 10:29:43.261244 4776 scope.go:117] "RemoveContainer" containerID="5001bd6df546d1ceae6c934b8abd9de1f6f93838b1e654bff89ff6b24eb56ca9" Oct 11 10:29:43.263446 master-2 kubenswrapper[4776]: I1011 10:29:43.262718 4776 generic.go:334] "Generic (PLEG): container finished" podID="a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1" containerID="0b7bb22c9bcc10fdcba6be60dad53b1b80998cc309ea84f073feff75133d2485" exitCode=0 Oct 11 10:29:43.263446 master-2 kubenswrapper[4776]: I1011 10:29:43.262766 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" event={"ID":"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1","Type":"ContainerDied","Data":"0b7bb22c9bcc10fdcba6be60dad53b1b80998cc309ea84f073feff75133d2485"} Oct 11 10:29:43.263446 master-2 kubenswrapper[4776]: I1011 10:29:43.263315 4776 scope.go:117] "RemoveContainer" containerID="0b7bb22c9bcc10fdcba6be60dad53b1b80998cc309ea84f073feff75133d2485" Oct 11 10:29:43.970324 master-2 kubenswrapper[4776]: I1011 10:29:43.970199 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:43.970324 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:43.970324 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:43.970324 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:43.970324 master-2 kubenswrapper[4776]: I1011 10:29:43.970271 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:44.114666 master-2 kubenswrapper[4776]: I1011 10:29:44.114604 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7tbzg"] Oct 11 10:29:44.115427 master-2 kubenswrapper[4776]: I1011 10:29:44.115386 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.118595 master-2 kubenswrapper[4776]: I1011 10:29:44.118544 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Oct 11 10:29:44.235198 master-2 kubenswrapper[4776]: I1011 10:29:44.235049 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.235198 master-2 kubenswrapper[4776]: I1011 10:29:44.235143 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.235621 master-2 kubenswrapper[4776]: I1011 10:29:44.235573 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-ready\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.235708 master-2 kubenswrapper[4776]: I1011 10:29:44.235637 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9qsm\" (UniqueName: \"kubernetes.io/projected/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-kube-api-access-x9qsm\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.271293 master-2 kubenswrapper[4776]: I1011 10:29:44.271200 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54" event={"ID":"a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1","Type":"ContainerStarted","Data":"16105f8d068da7f4b93fbced46c0b757ac012260cda17d23e7fdedd94f1849d6"} Oct 11 10:29:44.274183 master-2 kubenswrapper[4776]: I1011 10:29:44.274134 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q" event={"ID":"f8050d30-444b-40a5-829c-1e3b788910a0","Type":"ContainerStarted","Data":"66d9d78635fc74937985f0529439e2b1341a4631940c42b266388afccee1f55f"} Oct 11 10:29:44.337785 master-2 kubenswrapper[4776]: I1011 10:29:44.336802 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-ready\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.337785 master-2 kubenswrapper[4776]: I1011 10:29:44.336858 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9qsm\" (UniqueName: \"kubernetes.io/projected/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-kube-api-access-x9qsm\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.337785 master-2 
kubenswrapper[4776]: I1011 10:29:44.336950 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.337785 master-2 kubenswrapper[4776]: I1011 10:29:44.336966 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.337785 master-2 kubenswrapper[4776]: I1011 10:29:44.337077 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.337785 master-2 kubenswrapper[4776]: I1011 10:29:44.337781 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-ready\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.338642 master-2 kubenswrapper[4776]: I1011 10:29:44.338610 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.354120 master-2 kubenswrapper[4776]: I1011 10:29:44.354068 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9qsm\" (UniqueName: \"kubernetes.io/projected/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-kube-api-access-x9qsm\") pod \"cni-sysctl-allowlist-ds-7tbzg\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.431841 master-2 kubenswrapper[4776]: I1011 10:29:44.431760 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:44.577201 master-2 kubenswrapper[4776]: I1011 10:29:44.576983 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-x7xhm"] Oct 11 10:29:44.578043 master-2 kubenswrapper[4776]: I1011 10:29:44.577987 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.580328 master-2 kubenswrapper[4776]: I1011 10:29:44.580280 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Oct 11 10:29:44.580423 master-2 kubenswrapper[4776]: I1011 10:29:44.580334 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-57fbd47578-g6s84"] Oct 11 10:29:44.580613 master-2 kubenswrapper[4776]: I1011 10:29:44.580570 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Oct 11 10:29:44.581497 master-2 kubenswrapper[4776]: I1011 10:29:44.581458 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.584406 master-2 kubenswrapper[4776]: I1011 10:29:44.584359 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Oct 11 10:29:44.584580 master-2 kubenswrapper[4776]: I1011 10:29:44.584546 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Oct 11 10:29:44.584754 master-2 kubenswrapper[4776]: I1011 10:29:44.584720 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Oct 11 10:29:44.586583 master-2 kubenswrapper[4776]: I1011 10:29:44.586534 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-57fbd47578-g6s84"] Oct 11 10:29:44.741404 master-2 kubenswrapper[4776]: I1011 10:29:44.741328 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/2c2bfc6c-87cf-45df-8901-abe788ae6d98-volume-directive-shadow\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.741404 master-2 kubenswrapper[4776]: I1011 10:29:44.741396 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.741625 master-2 kubenswrapper[4776]: I1011 10:29:44.741433 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cb15485f-03bd-4281-8626-f35346cf4b0b-sys\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.741625 master-2 kubenswrapper[4776]: I1011 10:29:44.741460 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2c2bfc6c-87cf-45df-8901-abe788ae6d98-metrics-client-ca\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.741625 master-2 kubenswrapper[4776]: 
I1011 10:29:44.741537 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtfnj\" (UniqueName: \"kubernetes.io/projected/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-api-access-mtfnj\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.741625 master-2 kubenswrapper[4776]: I1011 10:29:44.741607 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cb15485f-03bd-4281-8626-f35346cf4b0b-metrics-client-ca\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.741782 master-2 kubenswrapper[4776]: I1011 10:29:44.741636 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-wtmp\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.741782 master-2 kubenswrapper[4776]: I1011 10:29:44.741654 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-textfile\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.741782 master-2 kubenswrapper[4776]: I1011 10:29:44.741687 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbx7g\" (UniqueName: \"kubernetes.io/projected/cb15485f-03bd-4281-8626-f35346cf4b0b-kube-api-access-sbx7g\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.741782 master-2 kubenswrapper[4776]: I1011 10:29:44.741715 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/cb15485f-03bd-4281-8626-f35346cf4b0b-root\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.741895 master-2 kubenswrapper[4776]: I1011 10:29:44.741798 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-state-metrics-tls\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.741895 master-2 kubenswrapper[4776]: I1011 10:29:44.741866 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.741954 master-2 kubenswrapper[4776]: I1011 10:29:44.741930 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-tls\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.741985 master-2 kubenswrapper[4776]: I1011 10:29:44.741956 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.842621 master-2 kubenswrapper[4776]: I1011 10:29:44.842448 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-state-metrics-tls\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.842621 master-2 kubenswrapper[4776]: I1011 10:29:44.842495 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.842621 master-2 kubenswrapper[4776]: I1011 10:29:44.842527 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-tls\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.842621 master-2 kubenswrapper[4776]: I1011 10:29:44.842546 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.842621 master-2 kubenswrapper[4776]: I1011 10:29:44.842578 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/2c2bfc6c-87cf-45df-8901-abe788ae6d98-volume-directive-shadow\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.842621 master-2 kubenswrapper[4776]: I1011 10:29:44.842598 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.842621 master-2 kubenswrapper[4776]: I1011 10:29:44.842620 4776 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cb15485f-03bd-4281-8626-f35346cf4b0b-sys\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.843098 master-2 kubenswrapper[4776]: I1011 10:29:44.842640 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2c2bfc6c-87cf-45df-8901-abe788ae6d98-metrics-client-ca\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.843098 master-2 kubenswrapper[4776]: I1011 10:29:44.842662 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtfnj\" (UniqueName: \"kubernetes.io/projected/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-api-access-mtfnj\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.843098 master-2 kubenswrapper[4776]: I1011 10:29:44.842719 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cb15485f-03bd-4281-8626-f35346cf4b0b-metrics-client-ca\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.843098 master-2 kubenswrapper[4776]: I1011 10:29:44.842741 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-wtmp\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.843098 master-2 kubenswrapper[4776]: I1011 10:29:44.842760 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-textfile\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.843098 master-2 kubenswrapper[4776]: I1011 10:29:44.842778 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbx7g\" (UniqueName: \"kubernetes.io/projected/cb15485f-03bd-4281-8626-f35346cf4b0b-kube-api-access-sbx7g\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.843098 master-2 kubenswrapper[4776]: I1011 10:29:44.842972 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-wtmp\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.843098 master-2 kubenswrapper[4776]: I1011 10:29:44.843030 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/cb15485f-03bd-4281-8626-f35346cf4b0b-root\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" 
Oct 11 10:29:44.843098 master-2 kubenswrapper[4776]: I1011 10:29:44.843098 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/cb15485f-03bd-4281-8626-f35346cf4b0b-root\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.843471 master-2 kubenswrapper[4776]: I1011 10:29:44.843092 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cb15485f-03bd-4281-8626-f35346cf4b0b-sys\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.843516 master-2 kubenswrapper[4776]: I1011 10:29:44.843495 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-textfile\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.843647 master-2 kubenswrapper[4776]: I1011 10:29:44.843595 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cb15485f-03bd-4281-8626-f35346cf4b0b-metrics-client-ca\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.843647 master-2 kubenswrapper[4776]: I1011 10:29:44.843614 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/2c2bfc6c-87cf-45df-8901-abe788ae6d98-volume-directive-shadow\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.843647 master-2 kubenswrapper[4776]: I1011 10:29:44.843642 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2c2bfc6c-87cf-45df-8901-abe788ae6d98-metrics-client-ca\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.844032 master-2 kubenswrapper[4776]: I1011 10:29:44.843978 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.846601 master-2 kubenswrapper[4776]: I1011 10:29:44.846537 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.846699 master-2 kubenswrapper[4776]: I1011 10:29:44.846559 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: 
\"kubernetes.io/secret/cb15485f-03bd-4281-8626-f35346cf4b0b-node-exporter-tls\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.847779 master-2 kubenswrapper[4776]: I1011 10:29:44.847668 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-state-metrics-tls\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.848085 master-2 kubenswrapper[4776]: I1011 10:29:44.848040 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.859310 master-2 kubenswrapper[4776]: I1011 10:29:44.859237 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtfnj\" (UniqueName: \"kubernetes.io/projected/2c2bfc6c-87cf-45df-8901-abe788ae6d98-kube-api-access-mtfnj\") pod \"kube-state-metrics-57fbd47578-g6s84\" (UID: \"2c2bfc6c-87cf-45df-8901-abe788ae6d98\") " pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.860605 master-2 kubenswrapper[4776]: I1011 10:29:44.860560 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbx7g\" (UniqueName: \"kubernetes.io/projected/cb15485f-03bd-4281-8626-f35346cf4b0b-kube-api-access-sbx7g\") pod \"node-exporter-x7xhm\" (UID: \"cb15485f-03bd-4281-8626-f35346cf4b0b\") " pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.908356 master-2 kubenswrapper[4776]: I1011 10:29:44.908282 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-x7xhm" Oct 11 10:29:44.922341 master-2 kubenswrapper[4776]: W1011 10:29:44.922277 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb15485f_03bd_4281_8626_f35346cf4b0b.slice/crio-2eedcac0ac5383da254695df2a43bb4e3962da313e81ad250ee06f2761d62e4c WatchSource:0}: Error finding container 2eedcac0ac5383da254695df2a43bb4e3962da313e81ad250ee06f2761d62e4c: Status 404 returned error can't find the container with id 2eedcac0ac5383da254695df2a43bb4e3962da313e81ad250ee06f2761d62e4c Oct 11 10:29:44.937696 master-2 kubenswrapper[4776]: I1011 10:29:44.937610 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" Oct 11 10:29:44.970245 master-2 kubenswrapper[4776]: I1011 10:29:44.970141 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:44.970245 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:44.970245 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:44.970245 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:44.970996 master-2 kubenswrapper[4776]: I1011 10:29:44.970247 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:45.039179 master-2 kubenswrapper[4776]: I1011 10:29:45.039140 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:45.039306 master-2 kubenswrapper[4776]: I1011 10:29:45.039213 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:45.079902 master-2 kubenswrapper[4776]: I1011 10:29:45.079859 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:45.289282 master-2 kubenswrapper[4776]: I1011 10:29:45.289226 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-x7xhm" event={"ID":"cb15485f-03bd-4281-8626-f35346cf4b0b","Type":"ContainerStarted","Data":"2eedcac0ac5383da254695df2a43bb4e3962da313e81ad250ee06f2761d62e4c"} Oct 11 10:29:45.291853 master-2 kubenswrapper[4776]: I1011 10:29:45.291795 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" event={"ID":"2ee67bf2-b525-4e43-8f3c-be748c32c8d2","Type":"ContainerStarted","Data":"cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0"} Oct 11 10:29:45.291937 master-2 kubenswrapper[4776]: I1011 10:29:45.291858 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" event={"ID":"2ee67bf2-b525-4e43-8f3c-be748c32c8d2","Type":"ContainerStarted","Data":"18658a149bb98f078235774d71bf5a21b20cf7314050d8e8690f3080b789d04a"} Oct 11 10:29:45.292096 master-2 kubenswrapper[4776]: I1011 10:29:45.292067 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:45.306963 master-2 kubenswrapper[4776]: I1011 10:29:45.306870 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" podStartSLOduration=1.306857406 podStartE2EDuration="1.306857406s" podCreationTimestamp="2025-10-11 10:29:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:45.305140099 +0000 UTC m=+220.089566808" watchObservedRunningTime="2025-10-11 10:29:45.306857406 +0000 UTC m=+220.091284115" Oct 11 10:29:45.314780 master-2 kubenswrapper[4776]: I1011 10:29:45.314736 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:29:45.326500 master-2 kubenswrapper[4776]: I1011 10:29:45.326453 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:29:45.342658 master-2 kubenswrapper[4776]: W1011 10:29:45.342607 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c2bfc6c_87cf_45df_8901_abe788ae6d98.slice/crio-d62a435256f887ce698cc9f644a4d6f03bf9dfd34c32363942434fec8c556ad7 WatchSource:0}: Error finding container d62a435256f887ce698cc9f644a4d6f03bf9dfd34c32363942434fec8c556ad7: Status 404 returned error can't find the container with id d62a435256f887ce698cc9f644a4d6f03bf9dfd34c32363942434fec8c556ad7 Oct 11 10:29:45.343470 master-2 kubenswrapper[4776]: I1011 10:29:45.343427 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-57fbd47578-g6s84"] Oct 11 10:29:45.970732 master-2 kubenswrapper[4776]: I1011 10:29:45.970624 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:45.970732 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:45.970732 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:45.970732 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:45.970732 master-2 kubenswrapper[4776]: I1011 10:29:45.970725 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:46.297837 master-2 kubenswrapper[4776]: I1011 10:29:46.297578 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" event={"ID":"2c2bfc6c-87cf-45df-8901-abe788ae6d98","Type":"ContainerStarted","Data":"d62a435256f887ce698cc9f644a4d6f03bf9dfd34c32363942434fec8c556ad7"} Oct 11 10:29:46.974703 master-2 kubenswrapper[4776]: I1011 10:29:46.974560 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:46.974703 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:46.974703 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:46.974703 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:46.975556 master-2 kubenswrapper[4776]: I1011 10:29:46.974721 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:47.113096 master-2 kubenswrapper[4776]: I1011 10:29:47.113013 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7tbzg"] Oct 11 10:29:47.778365 master-2 kubenswrapper[4776]: I1011 10:29:47.778319 4776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log" Oct 11 10:29:47.969432 master-2 kubenswrapper[4776]: I1011 10:29:47.969297 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:47.969432 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:47.969432 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:47.969432 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:47.969922 master-2 kubenswrapper[4776]: I1011 10:29:47.969426 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:48.307046 master-2 kubenswrapper[4776]: I1011 10:29:48.306944 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" podUID="2ee67bf2-b525-4e43-8f3c-be748c32c8d2" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" gracePeriod=30 Oct 11 10:29:48.969153 master-2 kubenswrapper[4776]: I1011 10:29:48.969094 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:48.969153 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:48.969153 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:48.969153 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:48.969531 master-2 kubenswrapper[4776]: I1011 10:29:48.969164 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:49.922477 master-2 kubenswrapper[4776]: I1011 10:29:49.922422 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-65d86dff78-crzgp"] Oct 11 10:29:49.923187 master-2 kubenswrapper[4776]: I1011 10:29:49.923157 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:49.926080 master-2 kubenswrapper[4776]: I1011 10:29:49.926044 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Oct 11 10:29:49.927043 master-2 kubenswrapper[4776]: I1011 10:29:49.927019 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Oct 11 10:29:49.927115 master-2 kubenswrapper[4776]: I1011 10:29:49.927046 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-ap7ej74ueigk4" Oct 11 10:29:49.927211 master-2 kubenswrapper[4776]: I1011 10:29:49.927191 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Oct 11 10:29:49.927260 master-2 kubenswrapper[4776]: I1011 10:29:49.927206 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Oct 11 10:29:49.960055 master-2 kubenswrapper[4776]: I1011 10:29:49.959994 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-65d86dff78-crzgp"] Oct 11 10:29:49.968836 master-2 kubenswrapper[4776]: I1011 10:29:49.968788 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:49.968836 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:49.968836 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:49.968836 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:49.969097 master-2 kubenswrapper[4776]: I1011 10:29:49.968849 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:50.117264 master-2 kubenswrapper[4776]: I1011 10:29:50.117199 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-client-certs\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:50.117493 master-2 kubenswrapper[4776]: I1011 10:29:50.117466 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2vd6\" (UniqueName: \"kubernetes.io/projected/5473628e-94c8-4706-bb03-ff4836debe5f-kube-api-access-q2vd6\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:50.117541 master-2 kubenswrapper[4776]: I1011 10:29:50.117503 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-server-tls\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:50.117583 master-2 kubenswrapper[4776]: I1011 
10:29:50.117546 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-metrics-server-audit-profiles\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:50.117625 master-2 kubenswrapper[4776]: I1011 10:29:50.117589 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:50.117779 master-2 kubenswrapper[4776]: I1011 10:29:50.117648 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:50.117779 master-2 kubenswrapper[4776]: I1011 10:29:50.117699 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5473628e-94c8-4706-bb03-ff4836debe5f-audit-log\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:50.219019 master-2 kubenswrapper[4776]: I1011 10:29:50.218885 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2vd6\" (UniqueName: \"kubernetes.io/projected/5473628e-94c8-4706-bb03-ff4836debe5f-kube-api-access-q2vd6\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:50.219019 master-2 kubenswrapper[4776]: I1011 10:29:50.218946 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-server-tls\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:50.219312 master-2 kubenswrapper[4776]: I1011 10:29:50.219270 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-metrics-server-audit-profiles\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:50.219354 master-2 kubenswrapper[4776]: I1011 10:29:50.219328 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:50.219387 master-2 kubenswrapper[4776]: I1011 
10:29:50.219377 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:50.219418 master-2 kubenswrapper[4776]: I1011 10:29:50.219405 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5473628e-94c8-4706-bb03-ff4836debe5f-audit-log\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:50.219460 master-2 kubenswrapper[4776]: I1011 10:29:50.219444 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-client-certs\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:50.220241 master-2 kubenswrapper[4776]: I1011 10:29:50.220210 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-metrics-server-audit-profiles\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:50.220592 master-2 kubenswrapper[4776]: I1011 10:29:50.220564 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5473628e-94c8-4706-bb03-ff4836debe5f-audit-log\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:50.220849 master-2 kubenswrapper[4776]: I1011 10:29:50.220806 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:50.222417 master-2 kubenswrapper[4776]: I1011 10:29:50.222389 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-client-certs\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:50.223720 master-2 kubenswrapper[4776]: I1011 10:29:50.223695 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:50.224653 master-2 kubenswrapper[4776]: I1011 10:29:50.224623 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-server-tls\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:50.240148 master-2 kubenswrapper[4776]: I1011 10:29:50.240103 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2vd6\" (UniqueName: \"kubernetes.io/projected/5473628e-94c8-4706-bb03-ff4836debe5f-kube-api-access-q2vd6\") pod \"metrics-server-65d86dff78-crzgp\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:50.259654 master-2 kubenswrapper[4776]: I1011 10:29:50.259609 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:50.969896 master-2 kubenswrapper[4776]: I1011 10:29:50.969806 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:50.969896 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:50.969896 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:50.969896 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:50.971012 master-2 kubenswrapper[4776]: I1011 10:29:50.969912 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:51.955887 master-2 kubenswrapper[4776]: I1011 10:29:51.955824 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-65d86dff78-crzgp"] Oct 11 10:29:51.969425 master-2 kubenswrapper[4776]: I1011 10:29:51.969383 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:51.969425 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:51.969425 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:51.969425 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:51.969585 master-2 kubenswrapper[4776]: I1011 10:29:51.969439 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:51.974245 master-2 kubenswrapper[4776]: W1011 10:29:51.974206 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5473628e_94c8_4706_bb03_ff4836debe5f.slice/crio-8271b5e67a2ec2732b55bb993f1b0ac7f455b4b07969cedd192379e7f4289e39 WatchSource:0}: Error finding container 8271b5e67a2ec2732b55bb993f1b0ac7f455b4b07969cedd192379e7f4289e39: Status 404 returned error can't find the container with id 8271b5e67a2ec2732b55bb993f1b0ac7f455b4b07969cedd192379e7f4289e39 Oct 11 10:29:52.339835 master-2 kubenswrapper[4776]: I1011 10:29:52.339784 4776 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" event={"ID":"5473628e-94c8-4706-bb03-ff4836debe5f","Type":"ContainerStarted","Data":"8271b5e67a2ec2732b55bb993f1b0ac7f455b4b07969cedd192379e7f4289e39"} Oct 11 10:29:52.342264 master-2 kubenswrapper[4776]: I1011 10:29:52.342130 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" event={"ID":"2c2bfc6c-87cf-45df-8901-abe788ae6d98","Type":"ContainerStarted","Data":"06f6d905d8def3d26d26325a7f849e8c50d0542fa115d03c8952c0b6e0c2eafd"} Oct 11 10:29:52.342264 master-2 kubenswrapper[4776]: I1011 10:29:52.342193 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" event={"ID":"2c2bfc6c-87cf-45df-8901-abe788ae6d98","Type":"ContainerStarted","Data":"cdd2de3fd84dd126318e808cc1775b92798830295eb7937eec44a1eefa782762"} Oct 11 10:29:52.342264 master-2 kubenswrapper[4776]: I1011 10:29:52.342236 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" event={"ID":"2c2bfc6c-87cf-45df-8901-abe788ae6d98","Type":"ContainerStarted","Data":"fb448ff35a94e97cf160c5d407990f97b568932826e467052ce18e54f038fa54"} Oct 11 10:29:52.343850 master-2 kubenswrapper[4776]: I1011 10:29:52.343735 4776 generic.go:334] "Generic (PLEG): container finished" podID="cb15485f-03bd-4281-8626-f35346cf4b0b" containerID="54a301115c8deb7385996e166f74a05ca63fa343e1cab483f27041f8c5a2154c" exitCode=0 Oct 11 10:29:52.343850 master-2 kubenswrapper[4776]: I1011 10:29:52.343796 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-x7xhm" event={"ID":"cb15485f-03bd-4281-8626-f35346cf4b0b","Type":"ContainerDied","Data":"54a301115c8deb7385996e166f74a05ca63fa343e1cab483f27041f8c5a2154c"} Oct 11 10:29:52.367203 master-2 kubenswrapper[4776]: I1011 10:29:52.367045 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-57fbd47578-g6s84" podStartSLOduration=2.176462263 podStartE2EDuration="8.367020649s" podCreationTimestamp="2025-10-11 10:29:44 +0000 UTC" firstStartedPulling="2025-10-11 10:29:45.345861685 +0000 UTC m=+220.130288394" lastFinishedPulling="2025-10-11 10:29:51.536420041 +0000 UTC m=+226.320846780" observedRunningTime="2025-10-11 10:29:52.364261134 +0000 UTC m=+227.148687853" watchObservedRunningTime="2025-10-11 10:29:52.367020649 +0000 UTC m=+227.151447368" Oct 11 10:29:52.969902 master-2 kubenswrapper[4776]: I1011 10:29:52.969827 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:52.969902 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:52.969902 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:52.969902 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:52.970239 master-2 kubenswrapper[4776]: I1011 10:29:52.969921 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:53.352616 master-2 kubenswrapper[4776]: I1011 10:29:53.352539 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/node-exporter-x7xhm" event={"ID":"cb15485f-03bd-4281-8626-f35346cf4b0b","Type":"ContainerStarted","Data":"81d0e718b95a9649bb78b0d797f1b25b8834ef71dd2b7ad870b6bc73505c9984"} Oct 11 10:29:53.352616 master-2 kubenswrapper[4776]: I1011 10:29:53.352624 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-x7xhm" event={"ID":"cb15485f-03bd-4281-8626-f35346cf4b0b","Type":"ContainerStarted","Data":"8c2043e5a51573290d304377951d953849d43114b0c6805feb6ffa45dc2b9f39"} Oct 11 10:29:53.375440 master-2 kubenswrapper[4776]: I1011 10:29:53.374003 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-x7xhm" podStartSLOduration=2.763992289 podStartE2EDuration="9.373985057s" podCreationTimestamp="2025-10-11 10:29:44 +0000 UTC" firstStartedPulling="2025-10-11 10:29:44.92446315 +0000 UTC m=+219.708889849" lastFinishedPulling="2025-10-11 10:29:51.534455868 +0000 UTC m=+226.318882617" observedRunningTime="2025-10-11 10:29:53.373265037 +0000 UTC m=+228.157691746" watchObservedRunningTime="2025-10-11 10:29:53.373985057 +0000 UTC m=+228.158411786" Oct 11 10:29:53.699991 master-2 kubenswrapper[4776]: I1011 10:29:53.699905 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc"] Oct 11 10:29:53.702804 master-2 kubenswrapper[4776]: I1011 10:29:53.701224 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" Oct 11 10:29:53.719366 master-2 kubenswrapper[4776]: I1011 10:29:53.719334 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc"] Oct 11 10:29:53.874168 master-2 kubenswrapper[4776]: I1011 10:29:53.874095 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2221fbca-4225-4685-9878-86ab81050ad4-webhook-certs\") pod \"multus-admission-controller-7b6b7bb859-5bmjc\" (UID: \"2221fbca-4225-4685-9878-86ab81050ad4\") " pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" Oct 11 10:29:53.874376 master-2 kubenswrapper[4776]: I1011 10:29:53.874225 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-426ls\" (UniqueName: \"kubernetes.io/projected/2221fbca-4225-4685-9878-86ab81050ad4-kube-api-access-426ls\") pod \"multus-admission-controller-7b6b7bb859-5bmjc\" (UID: \"2221fbca-4225-4685-9878-86ab81050ad4\") " pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" Oct 11 10:29:53.968556 master-2 kubenswrapper[4776]: I1011 10:29:53.968454 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:53.968556 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:53.968556 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:53.968556 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:53.968556 master-2 kubenswrapper[4776]: I1011 10:29:53.968513 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Oct 11 10:29:53.974952 master-2 kubenswrapper[4776]: I1011 10:29:53.974912 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-426ls\" (UniqueName: \"kubernetes.io/projected/2221fbca-4225-4685-9878-86ab81050ad4-kube-api-access-426ls\") pod \"multus-admission-controller-7b6b7bb859-5bmjc\" (UID: \"2221fbca-4225-4685-9878-86ab81050ad4\") " pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" Oct 11 10:29:53.975002 master-2 kubenswrapper[4776]: I1011 10:29:53.974962 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2221fbca-4225-4685-9878-86ab81050ad4-webhook-certs\") pod \"multus-admission-controller-7b6b7bb859-5bmjc\" (UID: \"2221fbca-4225-4685-9878-86ab81050ad4\") " pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" Oct 11 10:29:53.978031 master-2 kubenswrapper[4776]: I1011 10:29:53.977998 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2221fbca-4225-4685-9878-86ab81050ad4-webhook-certs\") pod \"multus-admission-controller-7b6b7bb859-5bmjc\" (UID: \"2221fbca-4225-4685-9878-86ab81050ad4\") " pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" Oct 11 10:29:53.992567 master-2 kubenswrapper[4776]: I1011 10:29:53.992490 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-426ls\" (UniqueName: \"kubernetes.io/projected/2221fbca-4225-4685-9878-86ab81050ad4-kube-api-access-426ls\") pod \"multus-admission-controller-7b6b7bb859-5bmjc\" (UID: \"2221fbca-4225-4685-9878-86ab81050ad4\") " pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" Oct 11 10:29:54.029079 master-2 kubenswrapper[4776]: I1011 10:29:54.029049 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" Oct 11 10:29:54.249400 master-2 kubenswrapper[4776]: I1011 10:29:54.249346 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc"] Oct 11 10:29:54.253230 master-2 kubenswrapper[4776]: W1011 10:29:54.253174 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2221fbca_4225_4685_9878_86ab81050ad4.slice/crio-0f7a635d7a63d025acfd4b8de59509dee92c85f7119a06daaf16db4f2f473ca4 WatchSource:0}: Error finding container 0f7a635d7a63d025acfd4b8de59509dee92c85f7119a06daaf16db4f2f473ca4: Status 404 returned error can't find the container with id 0f7a635d7a63d025acfd4b8de59509dee92c85f7119a06daaf16db4f2f473ca4 Oct 11 10:29:54.362367 master-2 kubenswrapper[4776]: I1011 10:29:54.362280 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" event={"ID":"5473628e-94c8-4706-bb03-ff4836debe5f","Type":"ContainerStarted","Data":"bc423808a1318a501a04a81a0b62715e5af3476c9da3fb5de99b8aa1ff2380a0"} Oct 11 10:29:54.362940 master-2 kubenswrapper[4776]: I1011 10:29:54.362370 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:29:54.363713 master-2 kubenswrapper[4776]: I1011 10:29:54.363631 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" event={"ID":"2221fbca-4225-4685-9878-86ab81050ad4","Type":"ContainerStarted","Data":"0f7a635d7a63d025acfd4b8de59509dee92c85f7119a06daaf16db4f2f473ca4"} Oct 11 10:29:54.383138 master-2 kubenswrapper[4776]: I1011 10:29:54.383029 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" podStartSLOduration=4.042096784 podStartE2EDuration="5.383010451s" podCreationTimestamp="2025-10-11 10:29:49 +0000 UTC" firstStartedPulling="2025-10-11 10:29:51.976543425 +0000 UTC m=+226.760970154" lastFinishedPulling="2025-10-11 10:29:53.317457112 +0000 UTC m=+228.101883821" observedRunningTime="2025-10-11 10:29:54.380629516 +0000 UTC m=+229.165056235" watchObservedRunningTime="2025-10-11 10:29:54.383010451 +0000 UTC m=+229.167437170" Oct 11 10:29:54.434294 master-2 kubenswrapper[4776]: E1011 10:29:54.434190 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 11 10:29:54.435661 master-2 kubenswrapper[4776]: E1011 10:29:54.435603 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 11 10:29:54.437415 master-2 kubenswrapper[4776]: E1011 10:29:54.437345 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" 
cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 11 10:29:54.437472 master-2 kubenswrapper[4776]: E1011 10:29:54.437409 4776 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" podUID="2ee67bf2-b525-4e43-8f3c-be748c32c8d2" containerName="kube-multus-additional-cni-plugins" Oct 11 10:29:54.969895 master-2 kubenswrapper[4776]: I1011 10:29:54.969854 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:54.969895 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:54.969895 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:54.969895 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:54.970284 master-2 kubenswrapper[4776]: I1011 10:29:54.970255 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:55.373333 master-2 kubenswrapper[4776]: I1011 10:29:55.373152 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" event={"ID":"2221fbca-4225-4685-9878-86ab81050ad4","Type":"ContainerStarted","Data":"9469a91709839fac0cc396b2e2a4c0dd37bd9a803d5418f5ea9f88f855d63e81"} Oct 11 10:29:55.373333 master-2 kubenswrapper[4776]: I1011 10:29:55.373219 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" event={"ID":"2221fbca-4225-4685-9878-86ab81050ad4","Type":"ContainerStarted","Data":"2c2a3e675a4041f09f4ce3a6aabd9daff8fd9052dd7ca6fdda6f9f80c3f22749"} Oct 11 10:29:55.394427 master-2 kubenswrapper[4776]: I1011 10:29:55.394323 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc" podStartSLOduration=2.394302946 podStartE2EDuration="2.394302946s" podCreationTimestamp="2025-10-11 10:29:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:29:55.392428095 +0000 UTC m=+230.176854834" watchObservedRunningTime="2025-10-11 10:29:55.394302946 +0000 UTC m=+230.178729675" Oct 11 10:29:55.427852 master-2 kubenswrapper[4776]: I1011 10:29:55.427777 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-s5r5b"] Oct 11 10:29:55.428259 master-2 kubenswrapper[4776]: I1011 10:29:55.428152 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" podUID="cbf33a7e-abea-411d-9a19-85cfe67debe3" containerName="multus-admission-controller" containerID="cri-o://38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3" gracePeriod=30 Oct 11 10:29:55.428577 master-2 kubenswrapper[4776]: I1011 10:29:55.428481 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" podUID="cbf33a7e-abea-411d-9a19-85cfe67debe3" 
containerName="kube-rbac-proxy" containerID="cri-o://4c073c5ee5fd6035e588b594f71843fa0867444b7edf11350aaa49874157a615" gracePeriod=30 Oct 11 10:29:55.969069 master-2 kubenswrapper[4776]: I1011 10:29:55.969001 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:55.969069 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:55.969069 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:55.969069 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:55.969069 master-2 kubenswrapper[4776]: I1011 10:29:55.969065 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:56.382662 master-2 kubenswrapper[4776]: I1011 10:29:56.382481 4776 generic.go:334] "Generic (PLEG): container finished" podID="cbf33a7e-abea-411d-9a19-85cfe67debe3" containerID="4c073c5ee5fd6035e588b594f71843fa0867444b7edf11350aaa49874157a615" exitCode=0 Oct 11 10:29:56.382662 master-2 kubenswrapper[4776]: I1011 10:29:56.382586 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" event={"ID":"cbf33a7e-abea-411d-9a19-85cfe67debe3","Type":"ContainerDied","Data":"4c073c5ee5fd6035e588b594f71843fa0867444b7edf11350aaa49874157a615"} Oct 11 10:29:57.031841 master-2 kubenswrapper[4776]: I1011 10:29:57.031752 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:57.031841 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:57.031841 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:57.031841 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:57.032278 master-2 kubenswrapper[4776]: I1011 10:29:57.031846 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:57.968956 master-2 kubenswrapper[4776]: I1011 10:29:57.968905 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:57.968956 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:57.968956 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:57.968956 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:57.969650 master-2 kubenswrapper[4776]: I1011 10:29:57.968963 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:58.704426 master-2 kubenswrapper[4776]: I1011 10:29:58.704136 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp" Oct 11 10:29:58.968912 master-2 kubenswrapper[4776]: I1011 10:29:58.968771 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:58.968912 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:58.968912 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:58.968912 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:58.969792 master-2 kubenswrapper[4776]: I1011 10:29:58.969758 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:29:59.969909 master-2 kubenswrapper[4776]: I1011 10:29:59.969840 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:29:59.969909 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:29:59.969909 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:29:59.969909 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:29:59.969909 master-2 kubenswrapper[4776]: I1011 10:29:59.969907 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:00.403891 master-2 kubenswrapper[4776]: I1011 10:30:00.403843 4776 generic.go:334] "Generic (PLEG): container finished" podID="59763d5b-237f-4095-bf52-86bb0154381c" containerID="3ce0e4c23d3462cc28a54aa78bddda37020e10bc5a0b28d2d4d54aa602abe170" exitCode=0 Oct 11 10:30:00.403891 master-2 kubenswrapper[4776]: I1011 10:30:00.403889 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" event={"ID":"59763d5b-237f-4095-bf52-86bb0154381c","Type":"ContainerDied","Data":"3ce0e4c23d3462cc28a54aa78bddda37020e10bc5a0b28d2d4d54aa602abe170"} Oct 11 10:30:00.404345 master-2 kubenswrapper[4776]: I1011 10:30:00.404324 4776 scope.go:117] "RemoveContainer" containerID="3ce0e4c23d3462cc28a54aa78bddda37020e10bc5a0b28d2d4d54aa602abe170" Oct 11 10:30:00.969314 master-2 kubenswrapper[4776]: I1011 10:30:00.969071 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:00.969314 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:00.969314 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:00.969314 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:00.969314 master-2 kubenswrapper[4776]: I1011 10:30:00.969207 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Oct 11 10:30:01.972039 master-2 kubenswrapper[4776]: I1011 10:30:01.971923 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:01.972039 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:01.972039 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:01.972039 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:01.972039 master-2 kubenswrapper[4776]: I1011 10:30:01.972019 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:02.418017 master-2 kubenswrapper[4776]: I1011 10:30:02.417972 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-7dcf5bd85b-6c2rl" event={"ID":"59763d5b-237f-4095-bf52-86bb0154381c","Type":"ContainerStarted","Data":"a6ee38636b55ee7ea51eb00b2f2e1868a2169757d3514e2b54cde6a87e060504"} Oct 11 10:30:02.969509 master-2 kubenswrapper[4776]: I1011 10:30:02.969426 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:02.969509 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:02.969509 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:02.969509 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:02.969509 master-2 kubenswrapper[4776]: I1011 10:30:02.969503 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:03.970540 master-2 kubenswrapper[4776]: I1011 10:30:03.970445 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:03.970540 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:03.970540 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:03.970540 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:03.970540 master-2 kubenswrapper[4776]: I1011 10:30:03.970536 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:04.435931 master-2 kubenswrapper[4776]: E1011 10:30:04.435720 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 11 10:30:04.438117 master-2 kubenswrapper[4776]: E1011 10:30:04.438066 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 11 10:30:04.441266 master-2 kubenswrapper[4776]: E1011 10:30:04.441201 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 11 10:30:04.441341 master-2 kubenswrapper[4776]: E1011 10:30:04.441287 4776 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" podUID="2ee67bf2-b525-4e43-8f3c-be748c32c8d2" containerName="kube-multus-additional-cni-plugins" Oct 11 10:30:04.972173 master-2 kubenswrapper[4776]: I1011 10:30:04.972004 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:04.972173 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:04.972173 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:04.972173 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:04.972173 master-2 kubenswrapper[4776]: I1011 10:30:04.972088 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:05.969266 master-2 kubenswrapper[4776]: I1011 10:30:05.969180 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:05.969266 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:05.969266 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:05.969266 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:05.969266 master-2 kubenswrapper[4776]: I1011 10:30:05.969266 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:06.970599 master-2 kubenswrapper[4776]: I1011 10:30:06.970419 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:06.970599 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:06.970599 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:06.970599 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:06.970599 master-2 kubenswrapper[4776]: I1011 10:30:06.970483 4776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:07.968654 master-2 kubenswrapper[4776]: I1011 10:30:07.968602 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:07.968654 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:07.968654 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:07.968654 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:07.969011 master-2 kubenswrapper[4776]: I1011 10:30:07.968665 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:08.969516 master-2 kubenswrapper[4776]: I1011 10:30:08.969456 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:08.969516 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:08.969516 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:08.969516 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:08.970401 master-2 kubenswrapper[4776]: I1011 10:30:08.969518 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:09.969295 master-2 kubenswrapper[4776]: I1011 10:30:09.969242 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:09.969295 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:09.969295 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:09.969295 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:09.969555 master-2 kubenswrapper[4776]: I1011 10:30:09.969315 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:10.969765 master-2 kubenswrapper[4776]: I1011 10:30:10.969694 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:10.969765 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:10.969765 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:10.969765 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:10.970429 master-2 kubenswrapper[4776]: I1011 10:30:10.969769 4776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:11.970173 master-2 kubenswrapper[4776]: I1011 10:30:11.970086 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:11.970173 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:11.970173 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:11.970173 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:11.971244 master-2 kubenswrapper[4776]: I1011 10:30:11.970170 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:12.958117 master-2 kubenswrapper[4776]: I1011 10:30:12.958024 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-5r2t9"] Oct 11 10:30:12.958778 master-2 kubenswrapper[4776]: I1011 10:30:12.958313 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" podUID="64310b0b-bae1-4ad3-b106-6d59d47d29b2" containerName="multus-admission-controller" containerID="cri-o://70bd3f9c400e0f2d03040b409d0be80f6f7bbda878ae150537f2b4ec7baf71bd" gracePeriod=30 Oct 11 10:30:12.958778 master-2 kubenswrapper[4776]: I1011 10:30:12.958422 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" podUID="64310b0b-bae1-4ad3-b106-6d59d47d29b2" containerName="kube-rbac-proxy" containerID="cri-o://7fbbe464a06e101f3c0d22eeb9c7ef3680e05cf4fb67606592c87be802acc829" gracePeriod=30 Oct 11 10:30:12.970855 master-2 kubenswrapper[4776]: I1011 10:30:12.970762 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:12.970855 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:12.970855 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:12.970855 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:12.970855 master-2 kubenswrapper[4776]: I1011 10:30:12.970851 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:13.101048 master-2 kubenswrapper[4776]: I1011 10:30:13.100992 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-66df44bc95-kxhjc_7004f3ff-6db8-446d-94c1-1223e975299d/authentication-operator/0.log" Oct 11 10:30:13.301369 master-2 kubenswrapper[4776]: I1011 10:30:13.301268 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-66df44bc95-kxhjc_7004f3ff-6db8-446d-94c1-1223e975299d/authentication-operator/1.log" Oct 11 
10:30:13.490773 master-2 kubenswrapper[4776]: I1011 10:30:13.490699 4776 generic.go:334] "Generic (PLEG): container finished" podID="64310b0b-bae1-4ad3-b106-6d59d47d29b2" containerID="7fbbe464a06e101f3c0d22eeb9c7ef3680e05cf4fb67606592c87be802acc829" exitCode=0 Oct 11 10:30:13.490773 master-2 kubenswrapper[4776]: I1011 10:30:13.490756 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" event={"ID":"64310b0b-bae1-4ad3-b106-6d59d47d29b2","Type":"ContainerDied","Data":"7fbbe464a06e101f3c0d22eeb9c7ef3680e05cf4fb67606592c87be802acc829"} Oct 11 10:30:13.502236 master-2 kubenswrapper[4776]: I1011 10:30:13.502187 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5ddb89f76-57kcw_c8cd90ff-e70c-4837-82c4-0fec67a8a51b/router/0.log" Oct 11 10:30:13.895979 master-2 kubenswrapper[4776]: I1011 10:30:13.895894 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-65b6f4d4c9-5wrz6_e350b624-6581-4982-96f3-cd5c37256e02/fix-audit-permissions/0.log" Oct 11 10:30:13.970197 master-2 kubenswrapper[4776]: I1011 10:30:13.970075 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:13.970197 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:13.970197 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:13.970197 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:13.970611 master-2 kubenswrapper[4776]: I1011 10:30:13.970201 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:14.105388 master-2 kubenswrapper[4776]: I1011 10:30:14.105287 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-65b6f4d4c9-5wrz6_e350b624-6581-4982-96f3-cd5c37256e02/oauth-apiserver/0.log" Oct 11 10:30:14.435321 master-2 kubenswrapper[4776]: E1011 10:30:14.435225 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 11 10:30:14.438207 master-2 kubenswrapper[4776]: E1011 10:30:14.436734 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 11 10:30:14.439291 master-2 kubenswrapper[4776]: E1011 10:30:14.439234 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 11 10:30:14.439385 master-2 kubenswrapper[4776]: E1011 10:30:14.439306 4776 prober.go:104] "Probe errored" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" podUID="2ee67bf2-b525-4e43-8f3c-be748c32c8d2" containerName="kube-multus-additional-cni-plugins" Oct 11 10:30:14.703874 master-2 kubenswrapper[4776]: I1011 10:30:14.703629 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-6bddf7d79-8wc54_a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1/etcd-operator/0.log" Oct 11 10:30:14.904600 master-2 kubenswrapper[4776]: I1011 10:30:14.904501 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-6bddf7d79-8wc54_a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1/etcd-operator/1.log" Oct 11 10:30:14.978660 master-2 kubenswrapper[4776]: I1011 10:30:14.978438 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:14.978660 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:14.978660 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:14.978660 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:14.979012 master-2 kubenswrapper[4776]: I1011 10:30:14.978653 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:15.970400 master-2 kubenswrapper[4776]: I1011 10:30:15.970280 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:15.970400 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:15.970400 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:15.970400 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:15.970400 master-2 kubenswrapper[4776]: I1011 10:30:15.970383 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:16.981723 master-2 kubenswrapper[4776]: I1011 10:30:16.981412 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:16.981723 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:16.981723 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:16.981723 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:16.981723 master-2 kubenswrapper[4776]: I1011 10:30:16.981512 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:17.807373 master-2 kubenswrapper[4776]: I1011 10:30:17.807266 
4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log" Oct 11 10:30:17.971075 master-2 kubenswrapper[4776]: I1011 10:30:17.970993 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:17.971075 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:17.971075 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:17.971075 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:17.971505 master-2 kubenswrapper[4776]: I1011 10:30:17.971085 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:18.405822 master-2 kubenswrapper[4776]: I1011 10:30:18.405756 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-7tbzg_2ee67bf2-b525-4e43-8f3c-be748c32c8d2/kube-multus-additional-cni-plugins/0.log" Oct 11 10:30:18.406292 master-2 kubenswrapper[4776]: I1011 10:30:18.405842 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:30:18.440072 master-2 kubenswrapper[4776]: I1011 10:30:18.439999 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-tuning-conf-dir\") pod \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " Oct 11 10:30:18.440148 master-2 kubenswrapper[4776]: I1011 10:30:18.440108 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-cni-sysctl-allowlist\") pod \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " Oct 11 10:30:18.440148 master-2 kubenswrapper[4776]: I1011 10:30:18.440147 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9qsm\" (UniqueName: \"kubernetes.io/projected/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-kube-api-access-x9qsm\") pod \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " Oct 11 10:30:18.440206 master-2 kubenswrapper[4776]: I1011 10:30:18.440152 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "2ee67bf2-b525-4e43-8f3c-be748c32c8d2" (UID: "2ee67bf2-b525-4e43-8f3c-be748c32c8d2"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:30:18.440206 master-2 kubenswrapper[4776]: I1011 10:30:18.440189 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-ready\") pod \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\" (UID: \"2ee67bf2-b525-4e43-8f3c-be748c32c8d2\") " Oct 11 10:30:18.440446 master-2 kubenswrapper[4776]: I1011 10:30:18.440422 4776 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-tuning-conf-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:30:18.440821 master-2 kubenswrapper[4776]: I1011 10:30:18.440793 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-ready" (OuterVolumeSpecName: "ready") pod "2ee67bf2-b525-4e43-8f3c-be748c32c8d2" (UID: "2ee67bf2-b525-4e43-8f3c-be748c32c8d2"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:30:18.441435 master-2 kubenswrapper[4776]: I1011 10:30:18.441344 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "2ee67bf2-b525-4e43-8f3c-be748c32c8d2" (UID: "2ee67bf2-b525-4e43-8f3c-be748c32c8d2"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:30:18.444026 master-2 kubenswrapper[4776]: I1011 10:30:18.443952 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-kube-api-access-x9qsm" (OuterVolumeSpecName: "kube-api-access-x9qsm") pod "2ee67bf2-b525-4e43-8f3c-be748c32c8d2" (UID: "2ee67bf2-b525-4e43-8f3c-be748c32c8d2"). InnerVolumeSpecName "kube-api-access-x9qsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:30:18.525121 master-2 kubenswrapper[4776]: I1011 10:30:18.525054 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-7tbzg_2ee67bf2-b525-4e43-8f3c-be748c32c8d2/kube-multus-additional-cni-plugins/0.log" Oct 11 10:30:18.525309 master-2 kubenswrapper[4776]: I1011 10:30:18.525162 4776 generic.go:334] "Generic (PLEG): container finished" podID="2ee67bf2-b525-4e43-8f3c-be748c32c8d2" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" exitCode=137 Oct 11 10:30:18.525309 master-2 kubenswrapper[4776]: I1011 10:30:18.525219 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" event={"ID":"2ee67bf2-b525-4e43-8f3c-be748c32c8d2","Type":"ContainerDied","Data":"cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0"} Oct 11 10:30:18.525309 master-2 kubenswrapper[4776]: I1011 10:30:18.525270 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" Oct 11 10:30:18.525405 master-2 kubenswrapper[4776]: I1011 10:30:18.525308 4776 scope.go:117] "RemoveContainer" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" Oct 11 10:30:18.525450 master-2 kubenswrapper[4776]: I1011 10:30:18.525285 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7tbzg" event={"ID":"2ee67bf2-b525-4e43-8f3c-be748c32c8d2","Type":"ContainerDied","Data":"18658a149bb98f078235774d71bf5a21b20cf7314050d8e8690f3080b789d04a"} Oct 11 10:30:18.540973 master-2 kubenswrapper[4776]: I1011 10:30:18.540931 4776 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-cni-sysctl-allowlist\") on node \"master-2\" DevicePath \"\"" Oct 11 10:30:18.541053 master-2 kubenswrapper[4776]: I1011 10:30:18.540974 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9qsm\" (UniqueName: \"kubernetes.io/projected/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-kube-api-access-x9qsm\") on node \"master-2\" DevicePath \"\"" Oct 11 10:30:18.541053 master-2 kubenswrapper[4776]: I1011 10:30:18.540994 4776 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2ee67bf2-b525-4e43-8f3c-be748c32c8d2-ready\") on node \"master-2\" DevicePath \"\"" Oct 11 10:30:18.544372 master-2 kubenswrapper[4776]: I1011 10:30:18.544328 4776 scope.go:117] "RemoveContainer" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" Oct 11 10:30:18.544810 master-2 kubenswrapper[4776]: E1011 10:30:18.544777 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0\": container with ID starting with cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0 not found: ID does not exist" containerID="cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0" Oct 11 10:30:18.544865 master-2 kubenswrapper[4776]: I1011 10:30:18.544814 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0"} err="failed to get container status \"cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0\": rpc error: code = NotFound desc = could not find container \"cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0\": container with ID starting with cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0 not found: ID does not exist" Oct 11 10:30:18.560480 master-2 kubenswrapper[4776]: I1011 10:30:18.560226 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7tbzg"] Oct 11 10:30:18.563647 master-2 kubenswrapper[4776]: I1011 10:30:18.563581 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7tbzg"] Oct 11 10:30:18.702264 master-2 kubenswrapper[4776]: I1011 10:30:18.702186 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/0.log" Oct 11 10:30:18.899189 master-2 kubenswrapper[4776]: I1011 10:30:18.899076 4776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/kube-rbac-proxy/0.log" Oct 11 10:30:18.969076 master-2 kubenswrapper[4776]: I1011 10:30:18.968950 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:18.969076 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:18.969076 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:18.969076 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:18.969460 master-2 kubenswrapper[4776]: I1011 10:30:18.969426 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:19.106460 master-2 kubenswrapper[4776]: I1011 10:30:19.106433 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5ddb89f76-57kcw_c8cd90ff-e70c-4837-82c4-0fec67a8a51b/router/0.log" Oct 11 10:30:19.505715 master-2 kubenswrapper[4776]: I1011 10:30:19.505609 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68f5d95b74-9h5mv_6967590c-695e-4e20-964b-0c643abdf367/kube-apiserver-operator/0.log" Oct 11 10:30:19.704645 master-2 kubenswrapper[4776]: I1011 10:30:19.704587 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68f5d95b74-9h5mv_6967590c-695e-4e20-964b-0c643abdf367/kube-apiserver-operator/1.log" Oct 11 10:30:19.969874 master-2 kubenswrapper[4776]: I1011 10:30:19.969804 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:19.969874 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:19.969874 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:19.969874 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:19.970232 master-2 kubenswrapper[4776]: I1011 10:30:19.969922 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:20.067199 master-2 kubenswrapper[4776]: I1011 10:30:20.067110 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ee67bf2-b525-4e43-8f3c-be748c32c8d2" path="/var/lib/kubelet/pods/2ee67bf2-b525-4e43-8f3c-be748c32c8d2/volumes" Oct 11 10:30:20.103053 master-2 kubenswrapper[4776]: I1011 10:30:20.102996 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-5d85974df9-5gj77_e487f283-7482-463c-90b6-a812e00d0e35/kube-controller-manager-operator/0.log" Oct 11 10:30:20.305169 master-2 kubenswrapper[4776]: I1011 10:30:20.304953 4776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-5d85974df9-5gj77_e487f283-7482-463c-90b6-a812e00d0e35/kube-controller-manager-operator/1.log" Oct 11 10:30:20.969349 master-2 kubenswrapper[4776]: I1011 10:30:20.969284 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:20.969349 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:20.969349 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:20.969349 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:20.970147 master-2 kubenswrapper[4776]: I1011 10:30:20.969359 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:21.701693 master-2 kubenswrapper[4776]: I1011 10:30:21.701607 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-766d6b44f6-s5shc_58aef476-6586-47bb-bf45-dbeccac6271a/kube-scheduler-operator-container/0.log" Oct 11 10:30:21.904182 master-2 kubenswrapper[4776]: I1011 10:30:21.904105 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-766d6b44f6-s5shc_58aef476-6586-47bb-bf45-dbeccac6271a/kube-scheduler-operator-container/1.log" Oct 11 10:30:21.970391 master-2 kubenswrapper[4776]: I1011 10:30:21.970228 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:21.970391 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:21.970391 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:21.970391 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:21.971085 master-2 kubenswrapper[4776]: I1011 10:30:21.970376 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:22.970111 master-2 kubenswrapper[4776]: I1011 10:30:22.970055 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:22.970111 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:22.970111 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:22.970111 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:22.970542 master-2 kubenswrapper[4776]: I1011 10:30:22.970507 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:23.314350 master-2 kubenswrapper[4776]: I1011 10:30:23.314223 4776 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-dczh4_e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc/cluster-olm-operator/0.log" Oct 11 10:30:23.498085 master-2 kubenswrapper[4776]: I1011 10:30:23.498033 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-dczh4_e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc/copy-catalogd-manifests/0.log" Oct 11 10:30:23.698328 master-2 kubenswrapper[4776]: I1011 10:30:23.698276 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-dczh4_e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc/copy-operator-controller-manifests/0.log" Oct 11 10:30:23.901908 master-2 kubenswrapper[4776]: I1011 10:30:23.901836 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-dczh4_e6e70e9c-b1bd-4f28-911c-fc6ecfd2e8fc/cluster-olm-operator/1.log" Oct 11 10:30:23.970532 master-2 kubenswrapper[4776]: I1011 10:30:23.970389 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:23.970532 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:23.970532 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:23.970532 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:23.970532 master-2 kubenswrapper[4776]: I1011 10:30:23.970494 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:24.106124 master-2 kubenswrapper[4776]: I1011 10:30:24.106059 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7d88655794-7jd4q_f8050d30-444b-40a5-829c-1e3b788910a0/openshift-apiserver-operator/0.log" Oct 11 10:30:24.301641 master-2 kubenswrapper[4776]: I1011 10:30:24.301520 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7d88655794-7jd4q_f8050d30-444b-40a5-829c-1e3b788910a0/openshift-apiserver-operator/1.log" Oct 11 10:30:24.969764 master-2 kubenswrapper[4776]: I1011 10:30:24.969664 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:24.969764 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:24.969764 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:24.969764 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:24.970103 master-2 kubenswrapper[4776]: I1011 10:30:24.969796 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:25.095818 master-2 kubenswrapper[4776]: I1011 10:30:25.095748 4776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-apiserver_apiserver-555f658fd6-wmcqt_7a89cb41-fb97-4282-a12d-c6f8d87bc41e/fix-audit-permissions/0.log" Oct 11 10:30:25.303778 master-2 kubenswrapper[4776]: I1011 10:30:25.303606 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-555f658fd6-wmcqt_7a89cb41-fb97-4282-a12d-c6f8d87bc41e/openshift-apiserver/0.log" Oct 11 10:30:25.487852 master-2 kubenswrapper[4776]: E1011 10:30:25.487751 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee67bf2_b525_4e43_8f3c_be748c32c8d2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf33a7e_abea_411d_9a19_85cfe67debe3.slice/crio-38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee67bf2_b525_4e43_8f3c_be748c32c8d2.slice/crio-cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee67bf2_b525_4e43_8f3c_be748c32c8d2.slice/crio-18658a149bb98f078235774d71bf5a21b20cf7314050d8e8690f3080b789d04a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64310b0b_bae1_4ad3_b106_6d59d47d29b2.slice/crio-7fbbe464a06e101f3c0d22eeb9c7ef3680e05cf4fb67606592c87be802acc829.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64310b0b_bae1_4ad3_b106_6d59d47d29b2.slice/crio-conmon-7fbbe464a06e101f3c0d22eeb9c7ef3680e05cf4fb67606592c87be802acc829.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf33a7e_abea_411d_9a19_85cfe67debe3.slice/crio-conmon-38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:30:25.488102 master-2 kubenswrapper[4776]: E1011 10:30:25.487831 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64310b0b_bae1_4ad3_b106_6d59d47d29b2.slice/crio-conmon-7fbbe464a06e101f3c0d22eeb9c7ef3680e05cf4fb67606592c87be802acc829.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee67bf2_b525_4e43_8f3c_be748c32c8d2.slice/crio-cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee67bf2_b525_4e43_8f3c_be748c32c8d2.slice/crio-18658a149bb98f078235774d71bf5a21b20cf7314050d8e8690f3080b789d04a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee67bf2_b525_4e43_8f3c_be748c32c8d2.slice/crio-conmon-cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf33a7e_abea_411d_9a19_85cfe67debe3.slice/crio-conmon-38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3.scope\": RecentStats: unable to find data in 
memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf33a7e_abea_411d_9a19_85cfe67debe3.slice/crio-38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee67bf2_b525_4e43_8f3c_be748c32c8d2.slice\": RecentStats: unable to find data in memory cache]" Oct 11 10:30:25.488146 master-2 kubenswrapper[4776]: E1011 10:30:25.488110 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf33a7e_abea_411d_9a19_85cfe67debe3.slice/crio-conmon-38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf33a7e_abea_411d_9a19_85cfe67debe3.slice/crio-38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:30:25.488289 master-2 kubenswrapper[4776]: E1011 10:30:25.488189 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf33a7e_abea_411d_9a19_85cfe67debe3.slice/crio-38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf33a7e_abea_411d_9a19_85cfe67debe3.slice/crio-conmon-38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee67bf2_b525_4e43_8f3c_be748c32c8d2.slice/crio-conmon-cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64310b0b_bae1_4ad3_b106_6d59d47d29b2.slice/crio-conmon-7fbbe464a06e101f3c0d22eeb9c7ef3680e05cf4fb67606592c87be802acc829.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64310b0b_bae1_4ad3_b106_6d59d47d29b2.slice/crio-7fbbe464a06e101f3c0d22eeb9c7ef3680e05cf4fb67606592c87be802acc829.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee67bf2_b525_4e43_8f3c_be748c32c8d2.slice/crio-cd553f64997d2eadb7e637d2153c250a09a8f0707e048e4ec162dc2750e166b0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee67bf2_b525_4e43_8f3c_be748c32c8d2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee67bf2_b525_4e43_8f3c_be748c32c8d2.slice/crio-18658a149bb98f078235774d71bf5a21b20cf7314050d8e8690f3080b789d04a\": RecentStats: unable to find data in memory cache]" Oct 11 10:30:25.497570 master-2 kubenswrapper[4776]: I1011 10:30:25.497487 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-555f658fd6-wmcqt_7a89cb41-fb97-4282-a12d-c6f8d87bc41e/openshift-apiserver-check-endpoints/0.log" Oct 11 10:30:25.588012 master-2 kubenswrapper[4776]: I1011 10:30:25.587829 4776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-admission-controller-77b66fddc8-s5r5b_cbf33a7e-abea-411d-9a19-85cfe67debe3/multus-admission-controller/0.log" Oct 11 10:30:25.588012 master-2 kubenswrapper[4776]: I1011 10:30:25.587921 4776 generic.go:334] "Generic (PLEG): container finished" podID="cbf33a7e-abea-411d-9a19-85cfe67debe3" containerID="38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3" exitCode=137 Oct 11 10:30:25.588012 master-2 kubenswrapper[4776]: I1011 10:30:25.587978 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" event={"ID":"cbf33a7e-abea-411d-9a19-85cfe67debe3","Type":"ContainerDied","Data":"38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3"} Oct 11 10:30:25.702989 master-2 kubenswrapper[4776]: I1011 10:30:25.702914 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-6bddf7d79-8wc54_a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1/etcd-operator/0.log" Oct 11 10:30:25.902766 master-2 kubenswrapper[4776]: I1011 10:30:25.902599 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-6bddf7d79-8wc54_a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1/etcd-operator/1.log" Oct 11 10:30:25.969757 master-2 kubenswrapper[4776]: I1011 10:30:25.969698 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:25.969757 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:25.969757 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:25.969757 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:25.970121 master-2 kubenswrapper[4776]: I1011 10:30:25.969765 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:26.101716 master-2 kubenswrapper[4776]: I1011 10:30:26.100856 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5745565d84-bq4rs_88129ec6-6f99-42a1-842a-6a965c6b58fe/openshift-controller-manager-operator/0.log" Oct 11 10:30:26.266818 master-2 kubenswrapper[4776]: I1011 10:30:26.266794 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-77b66fddc8-s5r5b_cbf33a7e-abea-411d-9a19-85cfe67debe3/multus-admission-controller/0.log" Oct 11 10:30:26.267029 master-2 kubenswrapper[4776]: I1011 10:30:26.267016 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:30:26.303803 master-2 kubenswrapper[4776]: I1011 10:30:26.302490 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5745565d84-bq4rs_88129ec6-6f99-42a1-842a-6a965c6b58fe/openshift-controller-manager-operator/1.log" Oct 11 10:30:26.339748 master-2 kubenswrapper[4776]: I1011 10:30:26.339664 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b7hm\" (UniqueName: \"kubernetes.io/projected/cbf33a7e-abea-411d-9a19-85cfe67debe3-kube-api-access-9b7hm\") pod \"cbf33a7e-abea-411d-9a19-85cfe67debe3\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " Oct 11 10:30:26.339952 master-2 kubenswrapper[4776]: I1011 10:30:26.339830 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") pod \"cbf33a7e-abea-411d-9a19-85cfe67debe3\" (UID: \"cbf33a7e-abea-411d-9a19-85cfe67debe3\") " Oct 11 10:30:26.343148 master-2 kubenswrapper[4776]: I1011 10:30:26.343099 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbf33a7e-abea-411d-9a19-85cfe67debe3-kube-api-access-9b7hm" (OuterVolumeSpecName: "kube-api-access-9b7hm") pod "cbf33a7e-abea-411d-9a19-85cfe67debe3" (UID: "cbf33a7e-abea-411d-9a19-85cfe67debe3"). InnerVolumeSpecName "kube-api-access-9b7hm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:30:26.343396 master-2 kubenswrapper[4776]: I1011 10:30:26.343355 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "cbf33a7e-abea-411d-9a19-85cfe67debe3" (UID: "cbf33a7e-abea-411d-9a19-85cfe67debe3"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:30:26.442057 master-2 kubenswrapper[4776]: I1011 10:30:26.441913 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b7hm\" (UniqueName: \"kubernetes.io/projected/cbf33a7e-abea-411d-9a19-85cfe67debe3-kube-api-access-9b7hm\") on node \"master-2\" DevicePath \"\"" Oct 11 10:30:26.442057 master-2 kubenswrapper[4776]: I1011 10:30:26.442001 4776 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cbf33a7e-abea-411d-9a19-85cfe67debe3-webhook-certs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:30:26.596623 master-2 kubenswrapper[4776]: I1011 10:30:26.596431 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-77b66fddc8-s5r5b_cbf33a7e-abea-411d-9a19-85cfe67debe3/multus-admission-controller/0.log" Oct 11 10:30:26.596623 master-2 kubenswrapper[4776]: I1011 10:30:26.596508 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" event={"ID":"cbf33a7e-abea-411d-9a19-85cfe67debe3","Type":"ContainerDied","Data":"d97eb420aaba8fb64f806ab9e8c4614e0403881645e49bc4703d4dfecdcf78f8"} Oct 11 10:30:26.596623 master-2 kubenswrapper[4776]: I1011 10:30:26.596547 4776 scope.go:117] "RemoveContainer" containerID="4c073c5ee5fd6035e588b594f71843fa0867444b7edf11350aaa49874157a615" Oct 11 10:30:26.597117 master-2 kubenswrapper[4776]: I1011 10:30:26.596733 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-s5r5b" Oct 11 10:30:26.612922 master-2 kubenswrapper[4776]: I1011 10:30:26.612868 4776 scope.go:117] "RemoveContainer" containerID="38896a8db11a4fd8fd95bb2d7cd5ad911bf611d90ca10e20fa7d0be02d8accb3" Oct 11 10:30:26.636240 master-2 kubenswrapper[4776]: I1011 10:30:26.636174 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-s5r5b"] Oct 11 10:30:26.642205 master-2 kubenswrapper[4776]: I1011 10:30:26.642147 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-s5r5b"] Oct 11 10:30:26.969851 master-2 kubenswrapper[4776]: I1011 10:30:26.969784 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:26.969851 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:26.969851 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:26.969851 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:26.970143 master-2 kubenswrapper[4776]: I1011 10:30:26.969870 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:27.301339 master-2 kubenswrapper[4776]: I1011 10:30:27.301201 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-f966fb6f8-8gkqg_e3281eb7-fb96-4bae-8c55-b79728d426b0/catalog-operator/0.log" Oct 11 10:30:27.703894 master-2 kubenswrapper[4776]: I1011 10:30:27.703849 4776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-867f8475d9-8lf59_d4354488-1b32-422d-bb06-767a952192a5/olm-operator/0.log" Oct 11 10:30:27.897038 master-2 kubenswrapper[4776]: I1011 10:30:27.896956 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-798cc87f55-xzntp_e20ebc39-150b-472a-bb22-328d8f5db87b/kube-rbac-proxy/0.log" Oct 11 10:30:27.969812 master-2 kubenswrapper[4776]: I1011 10:30:27.969698 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:27.969812 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:27.969812 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:27.969812 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:27.970028 master-2 kubenswrapper[4776]: I1011 10:30:27.969807 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:28.074231 master-2 kubenswrapper[4776]: I1011 10:30:28.074096 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbf33a7e-abea-411d-9a19-85cfe67debe3" path="/var/lib/kubelet/pods/cbf33a7e-abea-411d-9a19-85cfe67debe3/volumes" Oct 11 10:30:28.098680 master-2 kubenswrapper[4776]: I1011 10:30:28.098616 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-798cc87f55-xzntp_e20ebc39-150b-472a-bb22-328d8f5db87b/package-server-manager/0.log" Oct 11 10:30:28.501593 master-2 kubenswrapper[4776]: I1011 10:30:28.501519 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-77c85f5c6-cfrh6_4e35cfca-8883-465b-b952-cc91f7f5dd81/packageserver/0.log" Oct 11 10:30:28.968596 master-2 kubenswrapper[4776]: I1011 10:30:28.968517 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:28.968596 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:28.968596 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:28.968596 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:28.968950 master-2 kubenswrapper[4776]: I1011 10:30:28.968631 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:29.969105 master-2 kubenswrapper[4776]: I1011 10:30:29.969058 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:29.969105 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:29.969105 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:29.969105 master-2 kubenswrapper[4776]: 
healthz check failed Oct 11 10:30:29.969696 master-2 kubenswrapper[4776]: I1011 10:30:29.969113 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:30.268375 master-2 kubenswrapper[4776]: I1011 10:30:30.268229 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:30:30.969502 master-2 kubenswrapper[4776]: I1011 10:30:30.969452 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:30.969502 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:30.969502 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:30.969502 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:30.970054 master-2 kubenswrapper[4776]: I1011 10:30:30.969514 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:31.969754 master-2 kubenswrapper[4776]: I1011 10:30:31.969652 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:31.969754 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:31.969754 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:31.969754 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:31.969754 master-2 kubenswrapper[4776]: I1011 10:30:31.969732 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:32.969995 master-2 kubenswrapper[4776]: I1011 10:30:32.969933 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:32.969995 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:32.969995 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:32.969995 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:32.970655 master-2 kubenswrapper[4776]: I1011 10:30:32.970008 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:33.969890 master-2 kubenswrapper[4776]: I1011 10:30:33.969798 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 
10:30:33.969890 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:33.969890 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:33.969890 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:33.969890 master-2 kubenswrapper[4776]: I1011 10:30:33.969896 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:34.768881 master-2 kubenswrapper[4776]: E1011 10:30:34.768783 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" podUID="17bef070-1a9d-4090-b97a-7ce2c1c93b19" Oct 11 10:30:34.969389 master-2 kubenswrapper[4776]: I1011 10:30:34.969332 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:34.969389 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:34.969389 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:34.969389 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:34.969636 master-2 kubenswrapper[4776]: I1011 10:30:34.969416 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:35.642814 master-2 kubenswrapper[4776]: I1011 10:30:35.642745 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:30:35.969125 master-2 kubenswrapper[4776]: I1011 10:30:35.969035 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:35.969125 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:35.969125 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:35.969125 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:35.969443 master-2 kubenswrapper[4776]: I1011 10:30:35.969145 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:36.969655 master-2 kubenswrapper[4776]: I1011 10:30:36.969582 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:36.969655 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:36.969655 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:36.969655 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:36.970640 master-2 kubenswrapper[4776]: I1011 10:30:36.969676 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:37.790723 master-2 kubenswrapper[4776]: E1011 10:30:37.790599 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" podUID="cacd2d60-e8a5-450f-a4ad-dfc0194e3325" Oct 11 10:30:37.969499 master-2 kubenswrapper[4776]: I1011 10:30:37.969450 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:37.969499 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:37.969499 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:37.969499 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:37.969499 master-2 kubenswrapper[4776]: I1011 10:30:37.969498 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:38.208735 master-2 kubenswrapper[4776]: I1011 10:30:38.208630 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-2"] Oct 11 10:30:38.209066 master-2 kubenswrapper[4776]: E1011 10:30:38.208868 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee67bf2-b525-4e43-8f3c-be748c32c8d2" containerName="kube-multus-additional-cni-plugins" Oct 11 
10:30:38.209066 master-2 kubenswrapper[4776]: I1011 10:30:38.208884 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee67bf2-b525-4e43-8f3c-be748c32c8d2" containerName="kube-multus-additional-cni-plugins" Oct 11 10:30:38.209066 master-2 kubenswrapper[4776]: E1011 10:30:38.208909 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf33a7e-abea-411d-9a19-85cfe67debe3" containerName="multus-admission-controller" Oct 11 10:30:38.209066 master-2 kubenswrapper[4776]: I1011 10:30:38.208917 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf33a7e-abea-411d-9a19-85cfe67debe3" containerName="multus-admission-controller" Oct 11 10:30:38.209066 master-2 kubenswrapper[4776]: E1011 10:30:38.208927 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf33a7e-abea-411d-9a19-85cfe67debe3" containerName="kube-rbac-proxy" Oct 11 10:30:38.209066 master-2 kubenswrapper[4776]: I1011 10:30:38.208935 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf33a7e-abea-411d-9a19-85cfe67debe3" containerName="kube-rbac-proxy" Oct 11 10:30:38.209066 master-2 kubenswrapper[4776]: I1011 10:30:38.209027 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ee67bf2-b525-4e43-8f3c-be748c32c8d2" containerName="kube-multus-additional-cni-plugins" Oct 11 10:30:38.209066 master-2 kubenswrapper[4776]: I1011 10:30:38.209056 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf33a7e-abea-411d-9a19-85cfe67debe3" containerName="multus-admission-controller" Oct 11 10:30:38.209066 master-2 kubenswrapper[4776]: I1011 10:30:38.209068 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf33a7e-abea-411d-9a19-85cfe67debe3" containerName="kube-rbac-proxy" Oct 11 10:30:38.209616 master-2 kubenswrapper[4776]: I1011 10:30:38.209514 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-2" Oct 11 10:30:38.212203 master-2 kubenswrapper[4776]: I1011 10:30:38.212138 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Oct 11 10:30:38.252479 master-2 kubenswrapper[4776]: I1011 10:30:38.221119 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-2"] Oct 11 10:30:38.384801 master-2 kubenswrapper[4776]: I1011 10:30:38.384656 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-var-lock\") pod \"installer-1-master-2\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") " pod="openshift-etcd/installer-1-master-2" Oct 11 10:30:38.384801 master-2 kubenswrapper[4776]: I1011 10:30:38.384790 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kube-api-access\") pod \"installer-1-master-2\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") " pod="openshift-etcd/installer-1-master-2" Oct 11 10:30:38.385250 master-2 kubenswrapper[4776]: I1011 10:30:38.384826 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kubelet-dir\") pod \"installer-1-master-2\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") " pod="openshift-etcd/installer-1-master-2" Oct 11 10:30:38.485471 master-2 kubenswrapper[4776]: I1011 10:30:38.485316 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-var-lock\") pod \"installer-1-master-2\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") " pod="openshift-etcd/installer-1-master-2" Oct 11 10:30:38.485471 master-2 kubenswrapper[4776]: I1011 10:30:38.485380 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kube-api-access\") pod \"installer-1-master-2\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") " pod="openshift-etcd/installer-1-master-2" Oct 11 10:30:38.485471 master-2 kubenswrapper[4776]: I1011 10:30:38.485408 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kubelet-dir\") pod \"installer-1-master-2\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") " pod="openshift-etcd/installer-1-master-2" Oct 11 10:30:38.485471 master-2 kubenswrapper[4776]: I1011 10:30:38.485419 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-var-lock\") pod \"installer-1-master-2\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") " pod="openshift-etcd/installer-1-master-2" Oct 11 10:30:38.485850 master-2 kubenswrapper[4776]: I1011 10:30:38.485506 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kubelet-dir\") pod \"installer-1-master-2\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") " pod="openshift-etcd/installer-1-master-2" Oct 11 10:30:38.512973 
master-2 kubenswrapper[4776]: I1011 10:30:38.512894 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kube-api-access\") pod \"installer-1-master-2\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") " pod="openshift-etcd/installer-1-master-2" Oct 11 10:30:38.566120 master-2 kubenswrapper[4776]: I1011 10:30:38.566055 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-2" Oct 11 10:30:38.660019 master-2 kubenswrapper[4776]: I1011 10:30:38.659966 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:30:38.969716 master-2 kubenswrapper[4776]: I1011 10:30:38.969633 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:38.969716 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:38.969716 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:38.969716 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:38.970371 master-2 kubenswrapper[4776]: I1011 10:30:38.969755 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:39.021290 master-2 kubenswrapper[4776]: I1011 10:30:39.021241 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-2"] Oct 11 10:30:39.026195 master-2 kubenswrapper[4776]: W1011 10:30:39.026140 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podebd3d140_91cb_4ec4_91a0_ec45a87da4ea.slice/crio-6698f474fcd951cc464ec2ef01cd4e2bff033b68f3b955d8d604935119d5705f WatchSource:0}: Error finding container 6698f474fcd951cc464ec2ef01cd4e2bff033b68f3b955d8d604935119d5705f: Status 404 returned error can't find the container with id 6698f474fcd951cc464ec2ef01cd4e2bff033b68f3b955d8d604935119d5705f Oct 11 10:30:39.666575 master-2 kubenswrapper[4776]: I1011 10:30:39.666521 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-2" event={"ID":"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea","Type":"ContainerStarted","Data":"6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df"} Oct 11 10:30:39.666575 master-2 kubenswrapper[4776]: I1011 10:30:39.666576 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-2" event={"ID":"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea","Type":"ContainerStarted","Data":"6698f474fcd951cc464ec2ef01cd4e2bff033b68f3b955d8d604935119d5705f"} Oct 11 10:30:39.690006 master-2 kubenswrapper[4776]: I1011 10:30:39.689905 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-2" podStartSLOduration=1.6898792089999999 podStartE2EDuration="1.689879209s" podCreationTimestamp="2025-10-11 10:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:30:39.686153127 +0000 UTC m=+274.470579856" watchObservedRunningTime="2025-10-11 
10:30:39.689879209 +0000 UTC m=+274.474305948" Oct 11 10:30:39.702612 master-2 kubenswrapper[4776]: I1011 10:30:39.702558 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:30:39.702783 master-2 kubenswrapper[4776]: E1011 10:30:39.702730 4776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:30:39.702902 master-2 kubenswrapper[4776]: E1011 10:30:39.702868 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:32:41.70283644 +0000 UTC m=+396.487263189 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : configmap "client-ca" not found Oct 11 10:30:39.969547 master-2 kubenswrapper[4776]: I1011 10:30:39.969430 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:39.969547 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:39.969547 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:39.969547 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:39.969805 master-2 kubenswrapper[4776]: I1011 10:30:39.969533 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:40.969282 master-2 kubenswrapper[4776]: I1011 10:30:40.969204 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:40.969282 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:40.969282 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:40.969282 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:40.969935 master-2 kubenswrapper[4776]: I1011 10:30:40.969298 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:41.969443 master-2 kubenswrapper[4776]: I1011 10:30:41.969376 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:41.969443 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 
10:30:41.969443 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:41.969443 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:41.970113 master-2 kubenswrapper[4776]: I1011 10:30:41.969442 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:42.646907 master-2 kubenswrapper[4776]: I1011 10:30:42.646768 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:30:42.646907 master-2 kubenswrapper[4776]: E1011 10:30:42.646921 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:30:42.647311 master-2 kubenswrapper[4776]: E1011 10:30:42.646978 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:32:44.646962069 +0000 UTC m=+399.431388778 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : configmap "client-ca" not found Oct 11 10:30:42.969834 master-2 kubenswrapper[4776]: I1011 10:30:42.969787 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:42.969834 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:42.969834 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:42.969834 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:42.970433 master-2 kubenswrapper[4776]: I1011 10:30:42.969851 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:43.690911 master-2 kubenswrapper[4776]: I1011 10:30:43.690622 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-77b66fddc8-5r2t9_64310b0b-bae1-4ad3-b106-6d59d47d29b2/multus-admission-controller/0.log" Oct 11 10:30:43.690911 master-2 kubenswrapper[4776]: I1011 10:30:43.690662 4776 generic.go:334] "Generic (PLEG): container finished" podID="64310b0b-bae1-4ad3-b106-6d59d47d29b2" containerID="70bd3f9c400e0f2d03040b409d0be80f6f7bbda878ae150537f2b4ec7baf71bd" exitCode=137 Oct 11 10:30:43.690911 master-2 kubenswrapper[4776]: I1011 10:30:43.690719 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" event={"ID":"64310b0b-bae1-4ad3-b106-6d59d47d29b2","Type":"ContainerDied","Data":"70bd3f9c400e0f2d03040b409d0be80f6f7bbda878ae150537f2b4ec7baf71bd"} Oct 11 10:30:43.802427 master-2 
kubenswrapper[4776]: I1011 10:30:43.802368 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-77b66fddc8-5r2t9_64310b0b-bae1-4ad3-b106-6d59d47d29b2/multus-admission-controller/0.log" Oct 11 10:30:43.802543 master-2 kubenswrapper[4776]: I1011 10:30:43.802468 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:30:43.863066 master-2 kubenswrapper[4776]: I1011 10:30:43.863000 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plpwg\" (UniqueName: \"kubernetes.io/projected/64310b0b-bae1-4ad3-b106-6d59d47d29b2-kube-api-access-plpwg\") pod \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " Oct 11 10:30:43.863335 master-2 kubenswrapper[4776]: I1011 10:30:43.863093 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") pod \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\" (UID: \"64310b0b-bae1-4ad3-b106-6d59d47d29b2\") " Oct 11 10:30:43.865934 master-2 kubenswrapper[4776]: I1011 10:30:43.865895 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "64310b0b-bae1-4ad3-b106-6d59d47d29b2" (UID: "64310b0b-bae1-4ad3-b106-6d59d47d29b2"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:30:43.866752 master-2 kubenswrapper[4776]: I1011 10:30:43.866710 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64310b0b-bae1-4ad3-b106-6d59d47d29b2-kube-api-access-plpwg" (OuterVolumeSpecName: "kube-api-access-plpwg") pod "64310b0b-bae1-4ad3-b106-6d59d47d29b2" (UID: "64310b0b-bae1-4ad3-b106-6d59d47d29b2"). InnerVolumeSpecName "kube-api-access-plpwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:30:43.964884 master-2 kubenswrapper[4776]: I1011 10:30:43.964813 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plpwg\" (UniqueName: \"kubernetes.io/projected/64310b0b-bae1-4ad3-b106-6d59d47d29b2-kube-api-access-plpwg\") on node \"master-2\" DevicePath \"\"" Oct 11 10:30:43.964884 master-2 kubenswrapper[4776]: I1011 10:30:43.964867 4776 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/64310b0b-bae1-4ad3-b106-6d59d47d29b2-webhook-certs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:30:43.969964 master-2 kubenswrapper[4776]: I1011 10:30:43.969898 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:43.969964 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:43.969964 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:43.969964 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:43.969964 master-2 kubenswrapper[4776]: I1011 10:30:43.969953 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:44.700958 master-2 kubenswrapper[4776]: I1011 10:30:44.700862 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-77b66fddc8-5r2t9_64310b0b-bae1-4ad3-b106-6d59d47d29b2/multus-admission-controller/0.log" Oct 11 10:30:44.701252 master-2 kubenswrapper[4776]: I1011 10:30:44.700977 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" event={"ID":"64310b0b-bae1-4ad3-b106-6d59d47d29b2","Type":"ContainerDied","Data":"e72cc89f7bb8839ad3fcaec89df9b0ae1c41473603f0bffc6a5201981557d826"} Oct 11 10:30:44.701252 master-2 kubenswrapper[4776]: I1011 10:30:44.701038 4776 scope.go:117] "RemoveContainer" containerID="7fbbe464a06e101f3c0d22eeb9c7ef3680e05cf4fb67606592c87be802acc829" Oct 11 10:30:44.701252 master-2 kubenswrapper[4776]: I1011 10:30:44.701113 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-5r2t9" Oct 11 10:30:44.719612 master-2 kubenswrapper[4776]: I1011 10:30:44.719568 4776 scope.go:117] "RemoveContainer" containerID="70bd3f9c400e0f2d03040b409d0be80f6f7bbda878ae150537f2b4ec7baf71bd" Oct 11 10:30:44.729132 master-2 kubenswrapper[4776]: I1011 10:30:44.729034 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-5r2t9"] Oct 11 10:30:44.733940 master-2 kubenswrapper[4776]: I1011 10:30:44.733895 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-5r2t9"] Oct 11 10:30:44.970105 master-2 kubenswrapper[4776]: I1011 10:30:44.969879 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:44.970105 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:44.970105 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:44.970105 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:44.970105 master-2 kubenswrapper[4776]: I1011 10:30:44.970022 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:45.970266 master-2 kubenswrapper[4776]: I1011 10:30:45.970165 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:45.970266 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:45.970266 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:45.970266 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:45.971265 master-2 kubenswrapper[4776]: I1011 10:30:45.970279 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:46.064164 master-2 kubenswrapper[4776]: I1011 10:30:46.064091 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64310b0b-bae1-4ad3-b106-6d59d47d29b2" path="/var/lib/kubelet/pods/64310b0b-bae1-4ad3-b106-6d59d47d29b2/volumes" Oct 11 10:30:46.969911 master-2 kubenswrapper[4776]: I1011 10:30:46.969834 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:46.969911 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:46.969911 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:46.969911 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:46.969911 master-2 kubenswrapper[4776]: I1011 10:30:46.969897 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:47.494581 master-2 kubenswrapper[4776]: I1011 10:30:47.494505 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-rr7vn"] Oct 11 10:30:47.494859 master-2 kubenswrapper[4776]: E1011 10:30:47.494749 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64310b0b-bae1-4ad3-b106-6d59d47d29b2" containerName="multus-admission-controller" Oct 11 10:30:47.494859 master-2 kubenswrapper[4776]: I1011 10:30:47.494765 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="64310b0b-bae1-4ad3-b106-6d59d47d29b2" containerName="multus-admission-controller" Oct 11 10:30:47.494859 master-2 kubenswrapper[4776]: E1011 10:30:47.494785 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64310b0b-bae1-4ad3-b106-6d59d47d29b2" containerName="kube-rbac-proxy" Oct 11 10:30:47.494859 master-2 kubenswrapper[4776]: I1011 10:30:47.494793 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="64310b0b-bae1-4ad3-b106-6d59d47d29b2" containerName="kube-rbac-proxy" Oct 11 10:30:47.495017 master-2 kubenswrapper[4776]: I1011 10:30:47.494907 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="64310b0b-bae1-4ad3-b106-6d59d47d29b2" containerName="multus-admission-controller" Oct 11 10:30:47.495017 master-2 kubenswrapper[4776]: I1011 10:30:47.494921 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="64310b0b-bae1-4ad3-b106-6d59d47d29b2" containerName="kube-rbac-proxy" Oct 11 10:30:47.495374 master-2 kubenswrapper[4776]: I1011 10:30:47.495337 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rr7vn" Oct 11 10:30:47.498115 master-2 kubenswrapper[4776]: I1011 10:30:47.498063 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Oct 11 10:30:47.498578 master-2 kubenswrapper[4776]: I1011 10:30:47.498531 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Oct 11 10:30:47.498806 master-2 kubenswrapper[4776]: I1011 10:30:47.498760 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Oct 11 10:30:47.505556 master-2 kubenswrapper[4776]: I1011 10:30:47.505495 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rr7vn"] Oct 11 10:30:47.512279 master-2 kubenswrapper[4776]: I1011 10:30:47.512206 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b5880f74-fbfb-498e-9b47-d8d909d240e0-cert\") pod \"ingress-canary-rr7vn\" (UID: \"b5880f74-fbfb-498e-9b47-d8d909d240e0\") " pod="openshift-ingress-canary/ingress-canary-rr7vn" Oct 11 10:30:47.512417 master-2 kubenswrapper[4776]: I1011 10:30:47.512383 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59pw5\" (UniqueName: \"kubernetes.io/projected/b5880f74-fbfb-498e-9b47-d8d909d240e0-kube-api-access-59pw5\") pod \"ingress-canary-rr7vn\" (UID: \"b5880f74-fbfb-498e-9b47-d8d909d240e0\") " pod="openshift-ingress-canary/ingress-canary-rr7vn" Oct 11 10:30:47.613971 master-2 kubenswrapper[4776]: I1011 10:30:47.613854 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59pw5\" (UniqueName: 
\"kubernetes.io/projected/b5880f74-fbfb-498e-9b47-d8d909d240e0-kube-api-access-59pw5\") pod \"ingress-canary-rr7vn\" (UID: \"b5880f74-fbfb-498e-9b47-d8d909d240e0\") " pod="openshift-ingress-canary/ingress-canary-rr7vn" Oct 11 10:30:47.614219 master-2 kubenswrapper[4776]: I1011 10:30:47.614055 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b5880f74-fbfb-498e-9b47-d8d909d240e0-cert\") pod \"ingress-canary-rr7vn\" (UID: \"b5880f74-fbfb-498e-9b47-d8d909d240e0\") " pod="openshift-ingress-canary/ingress-canary-rr7vn" Oct 11 10:30:47.614361 master-2 kubenswrapper[4776]: E1011 10:30:47.614318 4776 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Oct 11 10:30:47.614459 master-2 kubenswrapper[4776]: E1011 10:30:47.614420 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5880f74-fbfb-498e-9b47-d8d909d240e0-cert podName:b5880f74-fbfb-498e-9b47-d8d909d240e0 nodeName:}" failed. No retries permitted until 2025-10-11 10:30:48.114389056 +0000 UTC m=+282.898815805 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b5880f74-fbfb-498e-9b47-d8d909d240e0-cert") pod "ingress-canary-rr7vn" (UID: "b5880f74-fbfb-498e-9b47-d8d909d240e0") : secret "canary-serving-cert" not found Oct 11 10:30:47.644759 master-2 kubenswrapper[4776]: I1011 10:30:47.644659 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59pw5\" (UniqueName: \"kubernetes.io/projected/b5880f74-fbfb-498e-9b47-d8d909d240e0-kube-api-access-59pw5\") pod \"ingress-canary-rr7vn\" (UID: \"b5880f74-fbfb-498e-9b47-d8d909d240e0\") " pod="openshift-ingress-canary/ingress-canary-rr7vn" Oct 11 10:30:47.724884 master-2 kubenswrapper[4776]: I1011 10:30:47.724796 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/0.log" Oct 11 10:30:47.724884 master-2 kubenswrapper[4776]: I1011 10:30:47.724876 4776 generic.go:334] "Generic (PLEG): container finished" podID="6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c" containerID="2eff4353493e1e27d8a85bd8e32e0408e179cf5370df38de2ded9a10d5e6c314" exitCode=1 Oct 11 10:30:47.725170 master-2 kubenswrapper[4776]: I1011 10:30:47.724917 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" event={"ID":"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c","Type":"ContainerDied","Data":"2eff4353493e1e27d8a85bd8e32e0408e179cf5370df38de2ded9a10d5e6c314"} Oct 11 10:30:47.725521 master-2 kubenswrapper[4776]: I1011 10:30:47.725472 4776 scope.go:117] "RemoveContainer" containerID="2eff4353493e1e27d8a85bd8e32e0408e179cf5370df38de2ded9a10d5e6c314" Oct 11 10:30:47.810116 master-2 kubenswrapper[4776]: I1011 10:30:47.810056 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log" Oct 11 10:30:47.969479 master-2 kubenswrapper[4776]: I1011 10:30:47.969427 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:47.969479 master-2 kubenswrapper[4776]: [-]has-synced 
failed: reason withheld Oct 11 10:30:47.969479 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:47.969479 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:47.969832 master-2 kubenswrapper[4776]: I1011 10:30:47.969498 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:48.120836 master-2 kubenswrapper[4776]: I1011 10:30:48.120662 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b5880f74-fbfb-498e-9b47-d8d909d240e0-cert\") pod \"ingress-canary-rr7vn\" (UID: \"b5880f74-fbfb-498e-9b47-d8d909d240e0\") " pod="openshift-ingress-canary/ingress-canary-rr7vn" Oct 11 10:30:48.126954 master-2 kubenswrapper[4776]: I1011 10:30:48.126866 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b5880f74-fbfb-498e-9b47-d8d909d240e0-cert\") pod \"ingress-canary-rr7vn\" (UID: \"b5880f74-fbfb-498e-9b47-d8d909d240e0\") " pod="openshift-ingress-canary/ingress-canary-rr7vn" Oct 11 10:30:48.418154 master-2 kubenswrapper[4776]: I1011 10:30:48.418109 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rr7vn" Oct 11 10:30:48.731016 master-2 kubenswrapper[4776]: I1011 10:30:48.730898 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/0.log" Oct 11 10:30:48.731016 master-2 kubenswrapper[4776]: I1011 10:30:48.730952 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" event={"ID":"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c","Type":"ContainerStarted","Data":"9d1e12fc2f0f72ed70b132b8b498f2531a6ae8ae01d1a1a52b35463b0839dedd"} Oct 11 10:30:48.829981 master-2 kubenswrapper[4776]: I1011 10:30:48.829924 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rr7vn"] Oct 11 10:30:48.836132 master-2 kubenswrapper[4776]: W1011 10:30:48.835637 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5880f74_fbfb_498e_9b47_d8d909d240e0.slice/crio-e4ee22c8d31650be987ce1a5eb6d3d06ac9f7a3c9e2a93bf1334f4054f0147ac WatchSource:0}: Error finding container e4ee22c8d31650be987ce1a5eb6d3d06ac9f7a3c9e2a93bf1334f4054f0147ac: Status 404 returned error can't find the container with id e4ee22c8d31650be987ce1a5eb6d3d06ac9f7a3c9e2a93bf1334f4054f0147ac Oct 11 10:30:48.969668 master-2 kubenswrapper[4776]: I1011 10:30:48.969610 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:48.969668 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:48.969668 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:48.969668 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:48.969933 master-2 kubenswrapper[4776]: I1011 10:30:48.969695 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" 
podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:49.737694 master-2 kubenswrapper[4776]: I1011 10:30:49.737585 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rr7vn" event={"ID":"b5880f74-fbfb-498e-9b47-d8d909d240e0","Type":"ContainerStarted","Data":"e9ab3ffb93281f8f6467ace2afc4934aa5d5c892f91a90c2d600053751198afa"} Oct 11 10:30:49.738195 master-2 kubenswrapper[4776]: I1011 10:30:49.738178 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rr7vn" event={"ID":"b5880f74-fbfb-498e-9b47-d8d909d240e0","Type":"ContainerStarted","Data":"e4ee22c8d31650be987ce1a5eb6d3d06ac9f7a3c9e2a93bf1334f4054f0147ac"} Oct 11 10:30:49.763163 master-2 kubenswrapper[4776]: I1011 10:30:49.763067 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-rr7vn" podStartSLOduration=2.76304178 podStartE2EDuration="2.76304178s" podCreationTimestamp="2025-10-11 10:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:30:49.758427535 +0000 UTC m=+284.542854264" watchObservedRunningTime="2025-10-11 10:30:49.76304178 +0000 UTC m=+284.547468489" Oct 11 10:30:49.969863 master-2 kubenswrapper[4776]: I1011 10:30:49.969805 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:49.969863 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:49.969863 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:49.969863 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:49.970471 master-2 kubenswrapper[4776]: I1011 10:30:49.970426 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:50.811600 master-2 kubenswrapper[4776]: I1011 10:30:50.811552 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-1-master-2"] Oct 11 10:30:50.812329 master-2 kubenswrapper[4776]: I1011 10:30:50.812303 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/installer-1-master-2" podUID="ebd3d140-91cb-4ec4-91a0-ec45a87da4ea" containerName="installer" containerID="cri-o://6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df" gracePeriod=30 Oct 11 10:30:50.968985 master-2 kubenswrapper[4776]: I1011 10:30:50.968902 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:50.968985 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:50.968985 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:50.968985 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:50.968985 master-2 kubenswrapper[4776]: I1011 10:30:50.968960 4776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:51.970149 master-2 kubenswrapper[4776]: I1011 10:30:51.970079 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:51.970149 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:51.970149 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:51.970149 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:51.970149 master-2 kubenswrapper[4776]: I1011 10:30:51.970150 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:52.969526 master-2 kubenswrapper[4776]: I1011 10:30:52.969452 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:52.969526 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:52.969526 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:52.969526 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:52.969951 master-2 kubenswrapper[4776]: I1011 10:30:52.969540 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:53.969482 master-2 kubenswrapper[4776]: I1011 10:30:53.969402 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:53.969482 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:53.969482 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:53.969482 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:53.970191 master-2 kubenswrapper[4776]: I1011 10:30:53.969482 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:54.605609 master-2 kubenswrapper[4776]: I1011 10:30:54.605503 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-2"] Oct 11 10:30:54.606503 master-2 kubenswrapper[4776]: I1011 10:30:54.606454 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-2" Oct 11 10:30:54.615730 master-2 kubenswrapper[4776]: I1011 10:30:54.615636 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-2"] Oct 11 10:30:54.803218 master-2 kubenswrapper[4776]: I1011 10:30:54.803162 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-var-lock\") pod \"installer-2-master-2\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") " pod="openshift-etcd/installer-2-master-2" Oct 11 10:30:54.803218 master-2 kubenswrapper[4776]: I1011 10:30:54.803213 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kube-api-access\") pod \"installer-2-master-2\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") " pod="openshift-etcd/installer-2-master-2" Oct 11 10:30:54.803526 master-2 kubenswrapper[4776]: I1011 10:30:54.803242 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kubelet-dir\") pod \"installer-2-master-2\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") " pod="openshift-etcd/installer-2-master-2" Oct 11 10:30:54.904181 master-2 kubenswrapper[4776]: I1011 10:30:54.903904 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-var-lock\") pod \"installer-2-master-2\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") " pod="openshift-etcd/installer-2-master-2" Oct 11 10:30:54.904181 master-2 kubenswrapper[4776]: I1011 10:30:54.903987 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kube-api-access\") pod \"installer-2-master-2\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") " pod="openshift-etcd/installer-2-master-2" Oct 11 10:30:54.904181 master-2 kubenswrapper[4776]: I1011 10:30:54.904014 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kubelet-dir\") pod \"installer-2-master-2\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") " pod="openshift-etcd/installer-2-master-2" Oct 11 10:30:54.904181 master-2 kubenswrapper[4776]: I1011 10:30:54.904015 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-var-lock\") pod \"installer-2-master-2\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") " pod="openshift-etcd/installer-2-master-2" Oct 11 10:30:54.904181 master-2 kubenswrapper[4776]: I1011 10:30:54.904083 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kubelet-dir\") pod \"installer-2-master-2\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") " pod="openshift-etcd/installer-2-master-2" Oct 11 10:30:54.926341 master-2 kubenswrapper[4776]: I1011 10:30:54.926284 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kube-api-access\") pod \"installer-2-master-2\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") " pod="openshift-etcd/installer-2-master-2" Oct 11 10:30:54.932383 master-2 kubenswrapper[4776]: I1011 10:30:54.932278 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-2" Oct 11 10:30:54.969587 master-2 kubenswrapper[4776]: I1011 10:30:54.969507 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:54.969587 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:54.969587 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:54.969587 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:54.969587 master-2 kubenswrapper[4776]: I1011 10:30:54.969559 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:55.332417 master-2 kubenswrapper[4776]: I1011 10:30:55.332325 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-2"] Oct 11 10:30:55.338601 master-2 kubenswrapper[4776]: W1011 10:30:55.338528 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod89edf964_f01b_4eaf_b627_9efa53a8f6d8.slice/crio-71e4b9dc7c050600ddc194a7eb0dee91146887aac61cb63950776221eacb81dc WatchSource:0}: Error finding container 71e4b9dc7c050600ddc194a7eb0dee91146887aac61cb63950776221eacb81dc: Status 404 returned error can't find the container with id 71e4b9dc7c050600ddc194a7eb0dee91146887aac61cb63950776221eacb81dc Oct 11 10:30:55.766117 master-2 kubenswrapper[4776]: I1011 10:30:55.766071 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-2" event={"ID":"89edf964-f01b-4eaf-b627-9efa53a8f6d8","Type":"ContainerStarted","Data":"e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443"} Oct 11 10:30:55.766117 master-2 kubenswrapper[4776]: I1011 10:30:55.766122 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-2" event={"ID":"89edf964-f01b-4eaf-b627-9efa53a8f6d8","Type":"ContainerStarted","Data":"71e4b9dc7c050600ddc194a7eb0dee91146887aac61cb63950776221eacb81dc"} Oct 11 10:30:55.792499 master-2 kubenswrapper[4776]: I1011 10:30:55.792422 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-2" podStartSLOduration=1.792404978 podStartE2EDuration="1.792404978s" podCreationTimestamp="2025-10-11 10:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:30:55.789970592 +0000 UTC m=+290.574397311" watchObservedRunningTime="2025-10-11 10:30:55.792404978 +0000 UTC m=+290.576831697" Oct 11 10:30:55.969639 master-2 kubenswrapper[4776]: I1011 10:30:55.969481 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:55.969639 master-2 
kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:55.969639 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:55.969639 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:55.969639 master-2 kubenswrapper[4776]: I1011 10:30:55.969544 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:56.969976 master-2 kubenswrapper[4776]: I1011 10:30:56.969876 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:56.969976 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:56.969976 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:56.969976 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:56.969976 master-2 kubenswrapper[4776]: I1011 10:30:56.969941 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:57.975553 master-2 kubenswrapper[4776]: I1011 10:30:57.975513 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:57.975553 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:57.975553 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:57.975553 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:57.976388 master-2 kubenswrapper[4776]: I1011 10:30:57.976141 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:58.970200 master-2 kubenswrapper[4776]: I1011 10:30:58.970138 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:58.970200 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:58.970200 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:58.970200 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:58.970864 master-2 kubenswrapper[4776]: I1011 10:30:58.970209 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:30:59.809922 master-2 kubenswrapper[4776]: I1011 10:30:59.809850 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-2-master-2"] Oct 11 10:30:59.810750 master-2 kubenswrapper[4776]: I1011 10:30:59.810136 4776 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-etcd/installer-2-master-2" podUID="89edf964-f01b-4eaf-b627-9efa53a8f6d8" containerName="installer" containerID="cri-o://e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443" gracePeriod=30 Oct 11 10:30:59.968867 master-2 kubenswrapper[4776]: I1011 10:30:59.968770 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:30:59.968867 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:30:59.968867 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:30:59.968867 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:30:59.969163 master-2 kubenswrapper[4776]: I1011 10:30:59.968870 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:00.265947 master-2 kubenswrapper[4776]: I1011 10:31:00.265751 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-2_89edf964-f01b-4eaf-b627-9efa53a8f6d8/installer/0.log" Oct 11 10:31:00.265947 master-2 kubenswrapper[4776]: I1011 10:31:00.265859 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-2" Oct 11 10:31:00.370999 master-2 kubenswrapper[4776]: I1011 10:31:00.370954 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-var-lock\") pod \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") " Oct 11 10:31:00.371197 master-2 kubenswrapper[4776]: I1011 10:31:00.371073 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-var-lock" (OuterVolumeSpecName: "var-lock") pod "89edf964-f01b-4eaf-b627-9efa53a8f6d8" (UID: "89edf964-f01b-4eaf-b627-9efa53a8f6d8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:31:00.371197 master-2 kubenswrapper[4776]: I1011 10:31:00.371097 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kube-api-access\") pod \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") " Oct 11 10:31:00.371268 master-2 kubenswrapper[4776]: I1011 10:31:00.371252 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kubelet-dir\") pod \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\" (UID: \"89edf964-f01b-4eaf-b627-9efa53a8f6d8\") " Oct 11 10:31:00.371402 master-2 kubenswrapper[4776]: I1011 10:31:00.371364 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "89edf964-f01b-4eaf-b627-9efa53a8f6d8" (UID: "89edf964-f01b-4eaf-b627-9efa53a8f6d8"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:31:00.371783 master-2 kubenswrapper[4776]: I1011 10:31:00.371761 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kubelet-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:31:00.371837 master-2 kubenswrapper[4776]: I1011 10:31:00.371785 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89edf964-f01b-4eaf-b627-9efa53a8f6d8-var-lock\") on node \"master-2\" DevicePath \"\"" Oct 11 10:31:00.374423 master-2 kubenswrapper[4776]: I1011 10:31:00.374361 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "89edf964-f01b-4eaf-b627-9efa53a8f6d8" (UID: "89edf964-f01b-4eaf-b627-9efa53a8f6d8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:31:00.473234 master-2 kubenswrapper[4776]: I1011 10:31:00.473155 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89edf964-f01b-4eaf-b627-9efa53a8f6d8-kube-api-access\") on node \"master-2\" DevicePath \"\"" Oct 11 10:31:00.802721 master-2 kubenswrapper[4776]: I1011 10:31:00.802646 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-2_89edf964-f01b-4eaf-b627-9efa53a8f6d8/installer/0.log" Oct 11 10:31:00.802958 master-2 kubenswrapper[4776]: I1011 10:31:00.802727 4776 generic.go:334] "Generic (PLEG): container finished" podID="89edf964-f01b-4eaf-b627-9efa53a8f6d8" containerID="e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443" exitCode=1 Oct 11 10:31:00.802958 master-2 kubenswrapper[4776]: I1011 10:31:00.802766 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-2" event={"ID":"89edf964-f01b-4eaf-b627-9efa53a8f6d8","Type":"ContainerDied","Data":"e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443"} Oct 11 10:31:00.802958 master-2 kubenswrapper[4776]: I1011 10:31:00.802795 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-2" event={"ID":"89edf964-f01b-4eaf-b627-9efa53a8f6d8","Type":"ContainerDied","Data":"71e4b9dc7c050600ddc194a7eb0dee91146887aac61cb63950776221eacb81dc"} Oct 11 10:31:00.802958 master-2 kubenswrapper[4776]: I1011 10:31:00.802816 4776 scope.go:117] "RemoveContainer" containerID="e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443" Oct 11 10:31:00.802958 master-2 kubenswrapper[4776]: I1011 10:31:00.802933 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-2" Oct 11 10:31:00.829203 master-2 kubenswrapper[4776]: I1011 10:31:00.829162 4776 scope.go:117] "RemoveContainer" containerID="e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443" Oct 11 10:31:00.829637 master-2 kubenswrapper[4776]: E1011 10:31:00.829596 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443\": container with ID starting with e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443 not found: ID does not exist" containerID="e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443" Oct 11 10:31:00.829692 master-2 kubenswrapper[4776]: I1011 10:31:00.829641 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443"} err="failed to get container status \"e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443\": rpc error: code = NotFound desc = could not find container \"e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443\": container with ID starting with e4f912a3f1b7558e5f8fa4a83be5d7422b4079fe088eb743626ced1f90f96443 not found: ID does not exist" Oct 11 10:31:00.844190 master-2 kubenswrapper[4776]: I1011 10:31:00.844129 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-2-master-2"] Oct 11 10:31:00.854751 master-2 kubenswrapper[4776]: I1011 10:31:00.854657 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/installer-2-master-2"] Oct 11 10:31:00.971533 master-2 kubenswrapper[4776]: I1011 10:31:00.971439 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:00.971533 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:00.971533 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:00.971533 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:00.972043 master-2 kubenswrapper[4776]: I1011 10:31:00.971549 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:01.969820 master-2 kubenswrapper[4776]: I1011 10:31:01.969738 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:01.969820 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:01.969820 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:01.969820 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:01.970797 master-2 kubenswrapper[4776]: I1011 10:31:01.969827 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:02.066393 master-2 kubenswrapper[4776]: I1011 10:31:02.066301 4776 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89edf964-f01b-4eaf-b627-9efa53a8f6d8" path="/var/lib/kubelet/pods/89edf964-f01b-4eaf-b627-9efa53a8f6d8/volumes" Oct 11 10:31:02.969249 master-2 kubenswrapper[4776]: I1011 10:31:02.969196 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:02.969249 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:02.969249 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:02.969249 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:02.969501 master-2 kubenswrapper[4776]: I1011 10:31:02.969269 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:03.206845 master-2 kubenswrapper[4776]: I1011 10:31:03.206781 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-3-master-2"] Oct 11 10:31:03.207499 master-2 kubenswrapper[4776]: E1011 10:31:03.207045 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89edf964-f01b-4eaf-b627-9efa53a8f6d8" containerName="installer" Oct 11 10:31:03.207499 master-2 kubenswrapper[4776]: I1011 10:31:03.207062 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="89edf964-f01b-4eaf-b627-9efa53a8f6d8" containerName="installer" Oct 11 10:31:03.207499 master-2 kubenswrapper[4776]: I1011 10:31:03.207173 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="89edf964-f01b-4eaf-b627-9efa53a8f6d8" containerName="installer" Oct 11 10:31:03.207758 master-2 kubenswrapper[4776]: I1011 10:31:03.207726 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:03.219542 master-2 kubenswrapper[4776]: I1011 10:31:03.219439 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-3-master-2"] Oct 11 10:31:03.310820 master-2 kubenswrapper[4776]: I1011 10:31:03.310730 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-var-lock\") pod \"installer-3-master-2\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:03.310820 master-2 kubenswrapper[4776]: I1011 10:31:03.310802 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kube-api-access\") pod \"installer-3-master-2\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:03.311309 master-2 kubenswrapper[4776]: I1011 10:31:03.311221 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kubelet-dir\") pod \"installer-3-master-2\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:03.412393 master-2 kubenswrapper[4776]: I1011 10:31:03.412339 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-var-lock\") pod \"installer-3-master-2\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:03.412393 master-2 kubenswrapper[4776]: I1011 10:31:03.412403 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kube-api-access\") pod \"installer-3-master-2\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:03.412758 master-2 kubenswrapper[4776]: I1011 10:31:03.412466 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kubelet-dir\") pod \"installer-3-master-2\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:03.412758 master-2 kubenswrapper[4776]: I1011 10:31:03.412615 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kubelet-dir\") pod \"installer-3-master-2\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:03.413012 master-2 kubenswrapper[4776]: I1011 10:31:03.412972 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-var-lock\") pod \"installer-3-master-2\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:03.756176 master-2 kubenswrapper[4776]: I1011 10:31:03.756096 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kube-api-access\") pod \"installer-3-master-2\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:03.822264 master-2 kubenswrapper[4776]: I1011 10:31:03.822188 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:03.974231 master-2 kubenswrapper[4776]: I1011 10:31:03.973715 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:03.974231 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:03.974231 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:03.974231 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:03.974231 master-2 kubenswrapper[4776]: I1011 10:31:03.973814 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:04.277704 master-2 kubenswrapper[4776]: I1011 10:31:04.277624 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-3-master-2"] Oct 11 10:31:04.282204 master-2 kubenswrapper[4776]: W1011 10:31:04.282127 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podff524bb0_602a_4579_bac9_c3f5c19ec9ba.slice/crio-caf35abea272da4e2eae59b8aeddbb02fa8fc9c63850f420fe7ddea3a3fbfdf7 WatchSource:0}: Error finding container caf35abea272da4e2eae59b8aeddbb02fa8fc9c63850f420fe7ddea3a3fbfdf7: Status 404 returned error can't find the container with id caf35abea272da4e2eae59b8aeddbb02fa8fc9c63850f420fe7ddea3a3fbfdf7 Oct 11 10:31:04.832095 master-2 kubenswrapper[4776]: I1011 10:31:04.832025 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-3-master-2" event={"ID":"ff524bb0-602a-4579-bac9-c3f5c19ec9ba","Type":"ContainerStarted","Data":"5dcd1c043c2c18cfa07bcf1cba0c1e16ed116132974cf974809ac324fe8a6c21"} Oct 11 10:31:04.832095 master-2 kubenswrapper[4776]: I1011 10:31:04.832077 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-3-master-2" event={"ID":"ff524bb0-602a-4579-bac9-c3f5c19ec9ba","Type":"ContainerStarted","Data":"caf35abea272da4e2eae59b8aeddbb02fa8fc9c63850f420fe7ddea3a3fbfdf7"} Oct 11 10:31:04.853769 master-2 kubenswrapper[4776]: I1011 10:31:04.853699 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-3-master-2" podStartSLOduration=1.853667879 podStartE2EDuration="1.853667879s" podCreationTimestamp="2025-10-11 10:31:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:31:04.852460556 +0000 UTC m=+299.636887305" watchObservedRunningTime="2025-10-11 10:31:04.853667879 +0000 UTC m=+299.638094588" Oct 11 10:31:04.970245 master-2 kubenswrapper[4776]: I1011 10:31:04.970110 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:04.970245 master-2 
kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:04.970245 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:04.970245 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:04.970245 master-2 kubenswrapper[4776]: I1011 10:31:04.970209 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:05.925729 master-2 kubenswrapper[4776]: I1011 10:31:05.925632 4776 kubelet.go:1505] "Image garbage collection succeeded" Oct 11 10:31:05.969149 master-2 kubenswrapper[4776]: I1011 10:31:05.969087 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:05.969149 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:05.969149 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:05.969149 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:05.969426 master-2 kubenswrapper[4776]: I1011 10:31:05.969161 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:06.968831 master-2 kubenswrapper[4776]: I1011 10:31:06.968741 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:06.968831 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:06.968831 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:06.968831 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:06.968831 master-2 kubenswrapper[4776]: I1011 10:31:06.968834 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:07.969637 master-2 kubenswrapper[4776]: I1011 10:31:07.969530 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:07.969637 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:07.969637 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:07.969637 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:07.969637 master-2 kubenswrapper[4776]: I1011 10:31:07.969602 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:08.970062 master-2 kubenswrapper[4776]: I1011 10:31:08.969978 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:08.970062 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:08.970062 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:08.970062 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:08.970909 master-2 kubenswrapper[4776]: I1011 10:31:08.970069 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:09.970736 master-2 kubenswrapper[4776]: I1011 10:31:09.970606 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:09.970736 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:09.970736 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:09.970736 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:09.970736 master-2 kubenswrapper[4776]: I1011 10:31:09.970711 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:10.120051 master-2 kubenswrapper[4776]: E1011 10:31:10.119942 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podebd3d140_91cb_4ec4_91a0_ec45a87da4ea.slice/crio-6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podebd3d140_91cb_4ec4_91a0_ec45a87da4ea.slice/crio-6698f474fcd951cc464ec2ef01cd4e2bff033b68f3b955d8d604935119d5705f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podebd3d140_91cb_4ec4_91a0_ec45a87da4ea.slice/crio-conmon-6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:31:10.489757 master-2 kubenswrapper[4776]: I1011 10:31:10.489644 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-2_ebd3d140-91cb-4ec4-91a0-ec45a87da4ea/installer/0.log" Oct 11 10:31:10.489757 master-2 kubenswrapper[4776]: I1011 10:31:10.489736 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-2" Oct 11 10:31:10.600106 master-2 kubenswrapper[4776]: I1011 10:31:10.599836 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kube-api-access\") pod \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") " Oct 11 10:31:10.600106 master-2 kubenswrapper[4776]: I1011 10:31:10.599917 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kubelet-dir\") pod \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") " Oct 11 10:31:10.600106 master-2 kubenswrapper[4776]: I1011 10:31:10.599993 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-var-lock\") pod \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\" (UID: \"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea\") " Oct 11 10:31:10.600570 master-2 kubenswrapper[4776]: I1011 10:31:10.600114 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ebd3d140-91cb-4ec4-91a0-ec45a87da4ea" (UID: "ebd3d140-91cb-4ec4-91a0-ec45a87da4ea"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:31:10.600570 master-2 kubenswrapper[4776]: I1011 10:31:10.600198 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-var-lock" (OuterVolumeSpecName: "var-lock") pod "ebd3d140-91cb-4ec4-91a0-ec45a87da4ea" (UID: "ebd3d140-91cb-4ec4-91a0-ec45a87da4ea"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:31:10.600570 master-2 kubenswrapper[4776]: I1011 10:31:10.600389 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kubelet-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:31:10.600570 master-2 kubenswrapper[4776]: I1011 10:31:10.600414 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-var-lock\") on node \"master-2\" DevicePath \"\"" Oct 11 10:31:10.602457 master-2 kubenswrapper[4776]: I1011 10:31:10.602388 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ebd3d140-91cb-4ec4-91a0-ec45a87da4ea" (UID: "ebd3d140-91cb-4ec4-91a0-ec45a87da4ea"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:31:10.701134 master-2 kubenswrapper[4776]: I1011 10:31:10.701074 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea-kube-api-access\") on node \"master-2\" DevicePath \"\"" Oct 11 10:31:10.867243 master-2 kubenswrapper[4776]: I1011 10:31:10.867118 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-2_ebd3d140-91cb-4ec4-91a0-ec45a87da4ea/installer/0.log" Oct 11 10:31:10.867243 master-2 kubenswrapper[4776]: I1011 10:31:10.867191 4776 generic.go:334] "Generic (PLEG): container finished" podID="ebd3d140-91cb-4ec4-91a0-ec45a87da4ea" containerID="6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df" exitCode=1 Oct 11 10:31:10.867243 master-2 kubenswrapper[4776]: I1011 10:31:10.867228 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-2" event={"ID":"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea","Type":"ContainerDied","Data":"6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df"} Oct 11 10:31:10.867472 master-2 kubenswrapper[4776]: I1011 10:31:10.867248 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-2" Oct 11 10:31:10.867472 master-2 kubenswrapper[4776]: I1011 10:31:10.867275 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-2" event={"ID":"ebd3d140-91cb-4ec4-91a0-ec45a87da4ea","Type":"ContainerDied","Data":"6698f474fcd951cc464ec2ef01cd4e2bff033b68f3b955d8d604935119d5705f"} Oct 11 10:31:10.867472 master-2 kubenswrapper[4776]: I1011 10:31:10.867316 4776 scope.go:117] "RemoveContainer" containerID="6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df" Oct 11 10:31:10.880345 master-2 kubenswrapper[4776]: I1011 10:31:10.880293 4776 scope.go:117] "RemoveContainer" containerID="6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df" Oct 11 10:31:10.880953 master-2 kubenswrapper[4776]: E1011 10:31:10.880891 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df\": container with ID starting with 6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df not found: ID does not exist" containerID="6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df" Oct 11 10:31:10.881075 master-2 kubenswrapper[4776]: I1011 10:31:10.880948 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df"} err="failed to get container status \"6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df\": rpc error: code = NotFound desc = could not find container \"6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df\": container with ID starting with 6b2640c1c6a57a7701c97f67c731efffa7d60da0f4572396d542a005be30e0df not found: ID does not exist" Oct 11 10:31:10.911981 master-2 kubenswrapper[4776]: I1011 10:31:10.911904 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-1-master-2"] Oct 11 10:31:10.915126 master-2 kubenswrapper[4776]: I1011 10:31:10.915079 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/installer-1-master-2"] Oct 11 10:31:10.969483 master-2 kubenswrapper[4776]: I1011 
10:31:10.969419 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:10.969483 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:10.969483 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:10.969483 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:10.969748 master-2 kubenswrapper[4776]: I1011 10:31:10.969507 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:11.970344 master-2 kubenswrapper[4776]: I1011 10:31:11.970225 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:11.970344 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:11.970344 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:11.970344 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:11.971282 master-2 kubenswrapper[4776]: I1011 10:31:11.970376 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:12.070253 master-2 kubenswrapper[4776]: I1011 10:31:12.070168 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebd3d140-91cb-4ec4-91a0-ec45a87da4ea" path="/var/lib/kubelet/pods/ebd3d140-91cb-4ec4-91a0-ec45a87da4ea/volumes" Oct 11 10:31:12.970513 master-2 kubenswrapper[4776]: I1011 10:31:12.970400 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:12.970513 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:12.970513 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:12.970513 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:12.971235 master-2 kubenswrapper[4776]: I1011 10:31:12.970558 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:13.970412 master-2 kubenswrapper[4776]: I1011 10:31:13.970300 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:13.970412 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:13.970412 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:13.970412 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:13.971358 master-2 kubenswrapper[4776]: I1011 10:31:13.970423 4776 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:14.970919 master-2 kubenswrapper[4776]: I1011 10:31:14.970775 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:14.970919 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:14.970919 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:14.970919 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:14.970919 master-2 kubenswrapper[4776]: I1011 10:31:14.970875 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:15.970560 master-2 kubenswrapper[4776]: I1011 10:31:15.970444 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:15.970560 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:15.970560 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:15.970560 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:15.970560 master-2 kubenswrapper[4776]: I1011 10:31:15.970563 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:16.970993 master-2 kubenswrapper[4776]: I1011 10:31:16.970896 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:16.970993 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:16.970993 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:16.970993 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:16.970993 master-2 kubenswrapper[4776]: I1011 10:31:16.970970 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:17.807920 master-2 kubenswrapper[4776]: I1011 10:31:17.807823 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log" Oct 11 10:31:17.969538 master-2 kubenswrapper[4776]: I1011 10:31:17.969407 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:17.969538 master-2 kubenswrapper[4776]: [-]has-synced failed: reason 
withheld Oct 11 10:31:17.969538 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:17.969538 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:17.969538 master-2 kubenswrapper[4776]: I1011 10:31:17.969507 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:18.970022 master-2 kubenswrapper[4776]: I1011 10:31:18.969917 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:18.970022 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:18.970022 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:18.970022 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:18.970022 master-2 kubenswrapper[4776]: I1011 10:31:18.970011 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:19.969937 master-2 kubenswrapper[4776]: I1011 10:31:19.969845 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:19.969937 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:19.969937 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:19.969937 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:19.969937 master-2 kubenswrapper[4776]: I1011 10:31:19.969928 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:20.969610 master-2 kubenswrapper[4776]: I1011 10:31:20.969545 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:20.969610 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:20.969610 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:20.969610 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:20.969951 master-2 kubenswrapper[4776]: I1011 10:31:20.969643 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:21.970364 master-2 kubenswrapper[4776]: I1011 10:31:21.970264 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:21.970364 master-2 kubenswrapper[4776]: [-]has-synced failed: reason 
withheld Oct 11 10:31:21.970364 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:21.970364 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:21.971860 master-2 kubenswrapper[4776]: I1011 10:31:21.970385 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:22.970354 master-2 kubenswrapper[4776]: I1011 10:31:22.970233 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:22.970354 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:22.970354 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:22.970354 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:22.971324 master-2 kubenswrapper[4776]: I1011 10:31:22.970371 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:23.971165 master-2 kubenswrapper[4776]: I1011 10:31:23.971065 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:23.971165 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:23.971165 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:23.971165 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:23.971165 master-2 kubenswrapper[4776]: I1011 10:31:23.971159 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:24.970230 master-2 kubenswrapper[4776]: I1011 10:31:24.970119 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:24.970230 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:24.970230 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:24.970230 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:24.970518 master-2 kubenswrapper[4776]: I1011 10:31:24.970280 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:25.969512 master-2 kubenswrapper[4776]: I1011 10:31:25.969428 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:25.969512 master-2 kubenswrapper[4776]: [-]has-synced failed: reason 
withheld Oct 11 10:31:25.969512 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:25.969512 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:25.969512 master-2 kubenswrapper[4776]: I1011 10:31:25.969504 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:26.969216 master-2 kubenswrapper[4776]: I1011 10:31:26.969136 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:26.969216 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:26.969216 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:26.969216 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:26.970436 master-2 kubenswrapper[4776]: I1011 10:31:26.969245 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:27.969382 master-2 kubenswrapper[4776]: I1011 10:31:27.969263 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:27.969382 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:27.969382 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:27.969382 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:27.970817 master-2 kubenswrapper[4776]: I1011 10:31:27.969395 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:28.969572 master-2 kubenswrapper[4776]: I1011 10:31:28.969489 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:28.969572 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:31:28.969572 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:28.969572 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:28.969572 master-2 kubenswrapper[4776]: I1011 10:31:28.969552 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:29.969641 master-2 kubenswrapper[4776]: I1011 10:31:29.969566 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:31:29.969641 master-2 kubenswrapper[4776]: [-]has-synced failed: reason 
withheld Oct 11 10:31:29.969641 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:31:29.969641 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:31:29.969641 master-2 kubenswrapper[4776]: I1011 10:31:29.969637 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:29.970808 master-2 kubenswrapper[4776]: I1011 10:31:29.969722 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:31:29.970808 master-2 kubenswrapper[4776]: I1011 10:31:29.970239 4776 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"d0cb5ca87b019c6dd7de016a32a463f61a07f9fbd819c59a6476d827605ded9c"} pod="openshift-ingress/router-default-5ddb89f76-57kcw" containerMessage="Container router failed startup probe, will be restarted" Oct 11 10:31:29.970808 master-2 kubenswrapper[4776]: I1011 10:31:29.970282 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" containerID="cri-o://d0cb5ca87b019c6dd7de016a32a463f61a07f9fbd819c59a6476d827605ded9c" gracePeriod=3600 Oct 11 10:31:41.589211 master-2 kubenswrapper[4776]: I1011 10:31:41.588890 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6"] Oct 11 10:31:41.589867 master-2 kubenswrapper[4776]: I1011 10:31:41.589182 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" containerID="cri-o://c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912" gracePeriod=120 Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: I1011 10:31:42.372590 4776 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-5wrz6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:31:42.372648 master-2 kubenswrapper[4776]: readyz check failed 
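In this excerpt each failed check appears twice: a patch_prober.go record carrying the truncated healthz/readyz body (reasons withheld) and a structured prober.go "Probe failed" record. Aggregating the structured records is a quick way to see how often each pod is failing. The sketch below is illustrative only, not part of the captured log; it assumes the excerpt has been saved verbatim to a local text file, hypothetically named kubelet-master-2.log.

#!/usr/bin/env python3
"""Count kubelet "Probe failed" records per probe type and pod.

Minimal sketch, assuming the journal excerpt above was saved verbatim to a
text file (hypothetical name: kubelet-master-2.log).
"""
import re
import sys
from collections import Counter

# prober.go emits records such as:
#   "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-..." ... probeResult="failure"
PROBE_RE = re.compile(r'"Probe failed"\s+probeType="(?P<type>[^"]+)"\s+pod="(?P<pod>[^"]+)"')

def summarize(text: str) -> Counter:
    """Return a Counter keyed by (probeType, pod)."""
    return Counter((m.group("type"), m.group("pod")) for m in PROBE_RE.finditer(text))

if __name__ == "__main__":
    # Hypothetical default path; pass the real file as the first argument.
    path = sys.argv[1] if len(sys.argv) > 1 else "kubelet-master-2.log"
    with open(path, encoding="utf-8") as fh:
        for (probe_type, pod), n in summarize(fh.read()).most_common():
            print(f"{n:4d}  {probe_type:<10} {pod}")

Run against the records above, it would show the Startup probe of openshift-ingress/router-default-5ddb89f76-57kcw failing once per second until the kubelet kills the container for restart (gracePeriod=3600), alongside the recurring Readiness failures of the openshift-oauth-apiserver and openshift-apiserver pods that follow.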
Oct 11 10:31:42.373218 master-2 kubenswrapper[4776]: I1011 10:31:42.372656 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:45.857919 master-2 kubenswrapper[4776]: E1011 10:31:45.857832 4776 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/etcd-pod.yaml\": /etc/kubernetes/manifests/etcd-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Oct 11 10:31:45.860377 master-2 kubenswrapper[4776]: I1011 10:31:45.860328 4776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-2"] Oct 11 10:31:45.860561 master-2 kubenswrapper[4776]: E1011 10:31:45.860530 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebd3d140-91cb-4ec4-91a0-ec45a87da4ea" containerName="installer" Oct 11 10:31:45.860561 master-2 kubenswrapper[4776]: I1011 10:31:45.860551 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebd3d140-91cb-4ec4-91a0-ec45a87da4ea" containerName="installer" Oct 11 10:31:45.860657 master-2 kubenswrapper[4776]: I1011 10:31:45.860645 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebd3d140-91cb-4ec4-91a0-ec45a87da4ea" containerName="installer" Oct 11 10:31:45.862197 master-2 kubenswrapper[4776]: I1011 10:31:45.862164 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-2" Oct 11 10:31:45.898518 master-2 kubenswrapper[4776]: I1011 10:31:45.898438 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-2"] Oct 11 10:31:45.999322 master-2 kubenswrapper[4776]: I1011 10:31:45.999265 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-data-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:45.999322 master-2 kubenswrapper[4776]: I1011 10:31:45.999317 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-cert-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:45.999628 master-2 kubenswrapper[4776]: I1011 10:31:45.999346 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-log-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:45.999628 master-2 kubenswrapper[4776]: I1011 10:31:45.999409 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-usr-local-bin\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:45.999628 master-2 kubenswrapper[4776]: I1011 10:31:45.999432 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-resource-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:45.999628 master-2 kubenswrapper[4776]: I1011 10:31:45.999505 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-static-pod-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.079805 master-2 kubenswrapper[4776]: I1011 10:31:46.079741 4776 generic.go:334] "Generic (PLEG): container finished" podID="ff524bb0-602a-4579-bac9-c3f5c19ec9ba" containerID="5dcd1c043c2c18cfa07bcf1cba0c1e16ed116132974cf974809ac324fe8a6c21" exitCode=0 Oct 11 10:31:46.079805 master-2 kubenswrapper[4776]: I1011 10:31:46.079791 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-3-master-2" event={"ID":"ff524bb0-602a-4579-bac9-c3f5c19ec9ba","Type":"ContainerDied","Data":"5dcd1c043c2c18cfa07bcf1cba0c1e16ed116132974cf974809ac324fe8a6c21"} Oct 11 10:31:46.100196 master-2 kubenswrapper[4776]: I1011 10:31:46.100133 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-log-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.100415 master-2 kubenswrapper[4776]: I1011 10:31:46.100239 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-usr-local-bin\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.100415 master-2 kubenswrapper[4776]: I1011 10:31:46.100274 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-resource-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.100415 master-2 kubenswrapper[4776]: I1011 10:31:46.100320 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-static-pod-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.100415 master-2 kubenswrapper[4776]: I1011 10:31:46.100343 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-data-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.100415 master-2 kubenswrapper[4776]: I1011 10:31:46.100371 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-cert-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.100561 master-2 kubenswrapper[4776]: I1011 10:31:46.100443 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-cert-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.100561 master-2 kubenswrapper[4776]: I1011 10:31:46.100484 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-log-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.100561 master-2 kubenswrapper[4776]: I1011 10:31:46.100514 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-usr-local-bin\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.100561 master-2 kubenswrapper[4776]: I1011 10:31:46.100545 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-resource-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.100667 master-2 kubenswrapper[4776]: I1011 10:31:46.100576 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-static-pod-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.100667 master-2 kubenswrapper[4776]: I1011 10:31:46.100604 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-data-dir\") pod \"etcd-master-2\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.194923 master-2 kubenswrapper[4776]: I1011 10:31:46.194859 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-2" Oct 11 10:31:46.210254 master-2 kubenswrapper[4776]: W1011 10:31:46.210188 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc492168afa20f49cb6e3534e1871011b.slice/crio-0df9cdf55dcce811bcb02b907151466b6d03c26b87c72a12e09804928f72dd92 WatchSource:0}: Error finding container 0df9cdf55dcce811bcb02b907151466b6d03c26b87c72a12e09804928f72dd92: Status 404 returned error can't find the container with id 0df9cdf55dcce811bcb02b907151466b6d03c26b87c72a12e09804928f72dd92 Oct 11 10:31:46.211443 master-2 kubenswrapper[4776]: I1011 10:31:46.211419 4776 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 10:31:47.086689 master-2 kubenswrapper[4776]: I1011 10:31:47.086616 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"c492168afa20f49cb6e3534e1871011b","Type":"ContainerStarted","Data":"0df9cdf55dcce811bcb02b907151466b6d03c26b87c72a12e09804928f72dd92"} Oct 11 10:31:47.304435 master-2 kubenswrapper[4776]: I1011 10:31:47.304161 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: I1011 10:31:47.370044 4776 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-5wrz6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:31:47.370176 master-2 kubenswrapper[4776]: I1011 10:31:47.370106 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:47.416757 master-2 kubenswrapper[4776]: I1011 10:31:47.416719 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kubelet-dir\") pod \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " Oct 11 10:31:47.416949 master-2 kubenswrapper[4776]: I1011 10:31:47.416929 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-var-lock\") pod \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " Oct 11 10:31:47.417094 master-2 kubenswrapper[4776]: I1011 10:31:47.417075 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kube-api-access\") pod \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\" (UID: \"ff524bb0-602a-4579-bac9-c3f5c19ec9ba\") " Oct 11 10:31:47.417264 master-2 kubenswrapper[4776]: I1011 10:31:47.416861 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ff524bb0-602a-4579-bac9-c3f5c19ec9ba" (UID: "ff524bb0-602a-4579-bac9-c3f5c19ec9ba"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:31:47.417322 master-2 kubenswrapper[4776]: I1011 10:31:47.416998 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-var-lock" (OuterVolumeSpecName: "var-lock") pod "ff524bb0-602a-4579-bac9-c3f5c19ec9ba" (UID: "ff524bb0-602a-4579-bac9-c3f5c19ec9ba"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:31:47.417637 master-2 kubenswrapper[4776]: I1011 10:31:47.417618 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kubelet-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:31:47.417762 master-2 kubenswrapper[4776]: I1011 10:31:47.417747 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-var-lock\") on node \"master-2\" DevicePath \"\"" Oct 11 10:31:47.420174 master-2 kubenswrapper[4776]: I1011 10:31:47.420120 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ff524bb0-602a-4579-bac9-c3f5c19ec9ba" (UID: "ff524bb0-602a-4579-bac9-c3f5c19ec9ba"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:31:47.519531 master-2 kubenswrapper[4776]: I1011 10:31:47.519444 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff524bb0-602a-4579-bac9-c3f5c19ec9ba-kube-api-access\") on node \"master-2\" DevicePath \"\"" Oct 11 10:31:47.788707 master-2 kubenswrapper[4776]: I1011 10:31:47.788656 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log" Oct 11 10:31:48.093368 master-2 kubenswrapper[4776]: I1011 10:31:48.093330 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-3-master-2" event={"ID":"ff524bb0-602a-4579-bac9-c3f5c19ec9ba","Type":"ContainerDied","Data":"caf35abea272da4e2eae59b8aeddbb02fa8fc9c63850f420fe7ddea3a3fbfdf7"} Oct 11 10:31:48.093368 master-2 kubenswrapper[4776]: I1011 10:31:48.093369 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caf35abea272da4e2eae59b8aeddbb02fa8fc9c63850f420fe7ddea3a3fbfdf7" Oct 11 10:31:48.093946 master-2 kubenswrapper[4776]: I1011 10:31:48.093385 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-3-master-2" Oct 11 10:31:49.100947 master-2 kubenswrapper[4776]: I1011 10:31:49.100907 4776 generic.go:334] "Generic (PLEG): container finished" podID="c492168afa20f49cb6e3534e1871011b" containerID="c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738" exitCode=0 Oct 11 10:31:49.101419 master-2 kubenswrapper[4776]: I1011 10:31:49.100989 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"c492168afa20f49cb6e3534e1871011b","Type":"ContainerDied","Data":"c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738"} Oct 11 10:31:50.107826 master-2 kubenswrapper[4776]: I1011 10:31:50.107714 4776 generic.go:334] "Generic (PLEG): container finished" podID="c492168afa20f49cb6e3534e1871011b" containerID="fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9" exitCode=0 Oct 11 10:31:50.107826 master-2 kubenswrapper[4776]: I1011 10:31:50.107801 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"c492168afa20f49cb6e3534e1871011b","Type":"ContainerDied","Data":"fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9"} Oct 11 10:31:51.121308 master-2 kubenswrapper[4776]: I1011 10:31:51.121180 4776 generic.go:334] "Generic (PLEG): container finished" podID="c492168afa20f49cb6e3534e1871011b" containerID="8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607" exitCode=0 Oct 11 10:31:51.121308 master-2 kubenswrapper[4776]: I1011 10:31:51.121272 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"c492168afa20f49cb6e3534e1871011b","Type":"ContainerDied","Data":"8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607"} Oct 11 10:31:51.695442 master-2 kubenswrapper[4776]: I1011 10:31:51.695383 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-555f658fd6-wmcqt"] Oct 11 10:31:51.695885 master-2 kubenswrapper[4776]: I1011 10:31:51.695841 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" containerID="cri-o://a486fb47915dcf562b4049ef498fba0a79dc2f0d9c2b35c61e3a77be9dcdeae7" gracePeriod=120 Oct 11 10:31:51.696135 master-2 kubenswrapper[4776]: I1011 10:31:51.696022 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver-check-endpoints" containerID="cri-o://2fdacc499227869c48e6c020ccf86a1927bcc28943d27adf9666ea1d0e17f652" gracePeriod=120 Oct 11 10:31:52.130140 master-2 kubenswrapper[4776]: I1011 10:31:52.130058 4776 generic.go:334] "Generic (PLEG): container finished" podID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerID="2fdacc499227869c48e6c020ccf86a1927bcc28943d27adf9666ea1d0e17f652" exitCode=0 Oct 11 10:31:52.130140 master-2 kubenswrapper[4776]: I1011 10:31:52.130111 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" event={"ID":"7a89cb41-fb97-4282-a12d-c6f8d87bc41e","Type":"ContainerDied","Data":"2fdacc499227869c48e6c020ccf86a1927bcc28943d27adf9666ea1d0e17f652"} Oct 11 10:31:52.132134 master-2 kubenswrapper[4776]: I1011 10:31:52.132094 4776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd/0.log" Oct 11 10:31:52.133138 master-2 kubenswrapper[4776]: I1011 10:31:52.133101 4776 generic.go:334] "Generic (PLEG): container finished" podID="c492168afa20f49cb6e3534e1871011b" containerID="5d99e5586f3f25c98384bcaec74505355d716d542f0d5177b62c41981c0f4f1f" exitCode=1 Oct 11 10:31:52.133138 master-2 kubenswrapper[4776]: I1011 10:31:52.133126 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"c492168afa20f49cb6e3534e1871011b","Type":"ContainerStarted","Data":"e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6"} Oct 11 10:31:52.133138 master-2 kubenswrapper[4776]: I1011 10:31:52.133139 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"c492168afa20f49cb6e3534e1871011b","Type":"ContainerDied","Data":"5d99e5586f3f25c98384bcaec74505355d716d542f0d5177b62c41981c0f4f1f"} Oct 11 10:31:52.133333 master-2 kubenswrapper[4776]: I1011 10:31:52.133149 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"c492168afa20f49cb6e3534e1871011b","Type":"ContainerStarted","Data":"1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8"} Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: I1011 10:31:52.372552 4776 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-5wrz6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:31:52.372686 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:31:52.373778 master-2 kubenswrapper[4776]: I1011 10:31:52.373348 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:52.373778 master-2 kubenswrapper[4776]: I1011 10:31:52.373437 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: I1011 10:31:52.929345 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver namespace/openshift-apiserver: Readiness 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:31:52.929497 master-2 kubenswrapper[4776]: I1011 10:31:52.929440 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:53.140720 master-2 kubenswrapper[4776]: I1011 10:31:53.140687 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd/0.log" Oct 11 10:31:53.142100 master-2 kubenswrapper[4776]: I1011 10:31:53.142071 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"c492168afa20f49cb6e3534e1871011b","Type":"ContainerStarted","Data":"2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d"} Oct 11 10:31:53.142175 master-2 kubenswrapper[4776]: I1011 10:31:53.142116 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"c492168afa20f49cb6e3534e1871011b","Type":"ContainerStarted","Data":"352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7"} Oct 11 10:31:53.142552 master-2 kubenswrapper[4776]: I1011 10:31:53.142524 4776 scope.go:117] "RemoveContainer" containerID="5d99e5586f3f25c98384bcaec74505355d716d542f0d5177b62c41981c0f4f1f" Oct 11 10:31:54.150118 master-2 kubenswrapper[4776]: I1011 10:31:54.150068 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd/1.log" Oct 11 10:31:54.154793 master-2 kubenswrapper[4776]: I1011 10:31:54.154768 4776 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd/0.log" Oct 11 10:31:54.156900 master-2 kubenswrapper[4776]: I1011 10:31:54.156847 4776 generic.go:334] "Generic (PLEG): container finished" podID="c492168afa20f49cb6e3534e1871011b" containerID="983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc" exitCode=1 Oct 11 10:31:54.156900 master-2 kubenswrapper[4776]: I1011 10:31:54.156887 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"c492168afa20f49cb6e3534e1871011b","Type":"ContainerDied","Data":"983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc"} Oct 11 10:31:54.157033 master-2 kubenswrapper[4776]: I1011 10:31:54.156922 4776 scope.go:117] "RemoveContainer" containerID="5d99e5586f3f25c98384bcaec74505355d716d542f0d5177b62c41981c0f4f1f" Oct 11 10:31:54.157750 master-2 kubenswrapper[4776]: I1011 10:31:54.157717 4776 scope.go:117] "RemoveContainer" containerID="983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc" Oct 11 10:31:54.158185 master-2 kubenswrapper[4776]: E1011 10:31:54.158154 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=etcd pod=etcd-master-2_openshift-etcd(c492168afa20f49cb6e3534e1871011b)\"" pod="openshift-etcd/etcd-master-2" podUID="c492168afa20f49cb6e3534e1871011b" Oct 11 10:31:55.163938 master-2 kubenswrapper[4776]: I1011 10:31:55.163887 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd/1.log" Oct 11 10:31:55.168391 master-2 kubenswrapper[4776]: I1011 10:31:55.168370 4776 scope.go:117] "RemoveContainer" containerID="983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc" Oct 11 10:31:55.168799 master-2 kubenswrapper[4776]: E1011 10:31:55.168777 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=etcd pod=etcd-master-2_openshift-etcd(c492168afa20f49cb6e3534e1871011b)\"" pod="openshift-etcd/etcd-master-2" podUID="c492168afa20f49cb6e3534e1871011b" Oct 11 10:31:56.195995 master-2 kubenswrapper[4776]: I1011 10:31:56.195928 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-2" Oct 11 10:31:56.195995 master-2 kubenswrapper[4776]: I1011 10:31:56.195974 4776 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd/etcd-master-2" Oct 11 10:31:56.195995 master-2 kubenswrapper[4776]: I1011 10:31:56.195985 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-2" Oct 11 10:31:56.195995 master-2 kubenswrapper[4776]: I1011 10:31:56.195995 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-2" Oct 11 10:31:56.196910 master-2 kubenswrapper[4776]: I1011 10:31:56.196588 4776 scope.go:117] "RemoveContainer" containerID="983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc" Oct 11 10:31:56.196910 master-2 kubenswrapper[4776]: E1011 10:31:56.196875 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=etcd pod=etcd-master-2_openshift-etcd(c492168afa20f49cb6e3534e1871011b)\"" pod="openshift-etcd/etcd-master-2" 
podUID="c492168afa20f49cb6e3534e1871011b" Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: I1011 10:31:57.371543 4776 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-5wrz6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:31:57.371632 master-2 kubenswrapper[4776]: I1011 10:31:57.371618 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:57.707236 master-2 kubenswrapper[4776]: I1011 10:31:57.707168 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-guard-master-2"] Oct 11 10:31:57.708218 master-2 kubenswrapper[4776]: E1011 10:31:57.707403 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff524bb0-602a-4579-bac9-c3f5c19ec9ba" containerName="installer" Oct 11 10:31:57.708218 master-2 kubenswrapper[4776]: I1011 10:31:57.707418 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff524bb0-602a-4579-bac9-c3f5c19ec9ba" containerName="installer" Oct 11 10:31:57.708218 master-2 kubenswrapper[4776]: I1011 10:31:57.707535 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff524bb0-602a-4579-bac9-c3f5c19ec9ba" containerName="installer" Oct 11 10:31:57.708218 master-2 kubenswrapper[4776]: I1011 10:31:57.708005 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-guard-master-2" Oct 11 10:31:57.711406 master-2 kubenswrapper[4776]: I1011 10:31:57.711370 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"openshift-service-ca.crt" Oct 11 10:31:57.712878 master-2 kubenswrapper[4776]: I1011 10:31:57.712847 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Oct 11 10:31:57.725821 master-2 kubenswrapper[4776]: I1011 10:31:57.721830 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/etcd-guard-master-2"] Oct 11 10:31:57.758305 master-2 kubenswrapper[4776]: I1011 10:31:57.758236 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbsb4\" (UniqueName: \"kubernetes.io/projected/9314095b-1661-46bd-8e19-2741d9d758fa-kube-api-access-gbsb4\") pod \"etcd-guard-master-2\" (UID: \"9314095b-1661-46bd-8e19-2741d9d758fa\") " pod="openshift-etcd/etcd-guard-master-2" Oct 11 10:31:57.859453 master-2 kubenswrapper[4776]: I1011 10:31:57.859344 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbsb4\" (UniqueName: \"kubernetes.io/projected/9314095b-1661-46bd-8e19-2741d9d758fa-kube-api-access-gbsb4\") pod \"etcd-guard-master-2\" (UID: \"9314095b-1661-46bd-8e19-2741d9d758fa\") " pod="openshift-etcd/etcd-guard-master-2" Oct 11 10:31:57.878230 master-2 kubenswrapper[4776]: I1011 10:31:57.878159 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbsb4\" (UniqueName: \"kubernetes.io/projected/9314095b-1661-46bd-8e19-2741d9d758fa-kube-api-access-gbsb4\") pod \"etcd-guard-master-2\" (UID: \"9314095b-1661-46bd-8e19-2741d9d758fa\") " pod="openshift-etcd/etcd-guard-master-2" Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: I1011 10:31:57.930523 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:31:57.930641 
master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:31:57.930641 master-2 kubenswrapper[4776]: I1011 10:31:57.930639 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:31:58.030505 master-2 kubenswrapper[4776]: I1011 10:31:58.030331 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-master-2" Oct 11 10:31:58.525891 master-2 kubenswrapper[4776]: I1011 10:31:58.525750 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/etcd-guard-master-2"] Oct 11 10:31:59.207260 master-2 kubenswrapper[4776]: I1011 10:31:59.207202 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-guard-master-2" event={"ID":"9314095b-1661-46bd-8e19-2741d9d758fa","Type":"ContainerStarted","Data":"dd92959466cbdad70a80055ac1e16987cd678122f01b686d6b49af348560fd6b"} Oct 11 10:31:59.207260 master-2 kubenswrapper[4776]: I1011 10:31:59.207248 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-guard-master-2" event={"ID":"9314095b-1661-46bd-8e19-2741d9d758fa","Type":"ContainerStarted","Data":"5bf637cb5c71d3038537bacbd0d464910a2872b51f4584c72a5dd453860e55c5"} Oct 11 10:31:59.207589 master-2 kubenswrapper[4776]: I1011 10:31:59.207514 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-guard-master-2" Oct 11 10:31:59.231469 master-2 kubenswrapper[4776]: I1011 10:31:59.228892 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-guard-master-2" podStartSLOduration=2.2288674 podStartE2EDuration="2.2288674s" podCreationTimestamp="2025-10-11 10:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:31:59.225521989 +0000 UTC m=+354.009948728" watchObservedRunningTime="2025-10-11 10:31:59.2288674 +0000 UTC m=+354.013294149" Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: I1011 10:32:02.371479 4776 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-5wrz6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: 
[+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:32:02.371544 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:02.372350 master-2 kubenswrapper[4776]: I1011 10:32:02.371548 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: I1011 10:32:02.930451 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:02.930548 master-2 kubenswrapper[4776]: I1011 10:32:02.930527 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:02.932075 master-2 kubenswrapper[4776]: I1011 10:32:02.930656 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:32:03.560981 master-2 kubenswrapper[4776]: I1011 10:32:03.560933 4776 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager/installer-4-master-2"] Oct 11 10:32:03.562584 master-2 kubenswrapper[4776]: I1011 10:32:03.562562 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 10:32:03.567057 master-2 kubenswrapper[4776]: I1011 10:32:03.567008 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 11 10:32:03.573486 master-2 kubenswrapper[4776]: I1011 10:32:03.573443 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-2"] Oct 11 10:32:03.664289 master-2 kubenswrapper[4776]: I1011 10:32:03.664207 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-var-lock\") pod \"installer-4-master-2\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") " pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 10:32:03.664289 master-2 kubenswrapper[4776]: I1011 10:32:03.664277 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kube-api-access\") pod \"installer-4-master-2\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") " pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 10:32:03.664289 master-2 kubenswrapper[4776]: I1011 10:32:03.664304 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kubelet-dir\") pod \"installer-4-master-2\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") " pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 10:32:03.765965 master-2 kubenswrapper[4776]: I1011 10:32:03.765911 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-var-lock\") pod \"installer-4-master-2\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") " pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 10:32:03.766342 master-2 kubenswrapper[4776]: I1011 10:32:03.766310 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kube-api-access\") pod \"installer-4-master-2\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") " pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 10:32:03.766527 master-2 kubenswrapper[4776]: I1011 10:32:03.766501 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kubelet-dir\") pod \"installer-4-master-2\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") " pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 10:32:03.766734 master-2 kubenswrapper[4776]: I1011 10:32:03.766637 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kubelet-dir\") pod \"installer-4-master-2\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") " pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 
10:32:03.766903 master-2 kubenswrapper[4776]: I1011 10:32:03.766068 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-var-lock\") pod \"installer-4-master-2\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") " pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 10:32:03.787461 master-2 kubenswrapper[4776]: I1011 10:32:03.787386 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kube-api-access\") pod \"installer-4-master-2\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") " pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 10:32:03.887441 master-2 kubenswrapper[4776]: I1011 10:32:03.887261 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 10:32:04.208582 master-2 kubenswrapper[4776]: I1011 10:32:04.208480 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:32:04.208850 master-2 kubenswrapper[4776]: I1011 10:32:04.208621 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:32:04.339947 master-2 kubenswrapper[4776]: I1011 10:32:04.339864 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-2"] Oct 11 10:32:04.347182 master-2 kubenswrapper[4776]: W1011 10:32:04.347100 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod860e97e2_10a4_4a16_ac4e_4a0fc7490200.slice/crio-2a0c267eb08670f3fe6902b94f01392021a837de4570d7e530b15613d10477eb WatchSource:0}: Error finding container 2a0c267eb08670f3fe6902b94f01392021a837de4570d7e530b15613d10477eb: Status 404 returned error can't find the container with id 2a0c267eb08670f3fe6902b94f01392021a837de4570d7e530b15613d10477eb Oct 11 10:32:04.710386 master-2 kubenswrapper[4776]: I1011 10:32:04.710253 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/etcd-guard-master-2"] Oct 11 10:32:05.243201 master-2 kubenswrapper[4776]: I1011 10:32:05.243148 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-2" event={"ID":"860e97e2-10a4-4a16-ac4e-4a0fc7490200","Type":"ContainerStarted","Data":"26ed7de5d40500630aec8d65a4a8355cae8b9e280b195543aa3fdc8cfbc315c3"} Oct 11 10:32:05.243201 master-2 kubenswrapper[4776]: I1011 10:32:05.243202 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-2" event={"ID":"860e97e2-10a4-4a16-ac4e-4a0fc7490200","Type":"ContainerStarted","Data":"2a0c267eb08670f3fe6902b94f01392021a837de4570d7e530b15613d10477eb"} Oct 11 10:32:05.270284 master-2 kubenswrapper[4776]: I1011 10:32:05.270095 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-2" podStartSLOduration=2.2700683010000002 
podStartE2EDuration="2.270068301s" podCreationTimestamp="2025-10-11 10:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:32:05.268605981 +0000 UTC m=+360.053032680" watchObservedRunningTime="2025-10-11 10:32:05.270068301 +0000 UTC m=+360.054495040" Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: I1011 10:32:07.374191 4776 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-5wrz6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:32:07.374292 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:07.376228 master-2 kubenswrapper[4776]: I1011 10:32:07.374360 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: I1011 10:32:07.933204 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: 
[+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:32:07.933360 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:07.934972 master-2 kubenswrapper[4776]: I1011 10:32:07.933357 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:08.059532 master-2 kubenswrapper[4776]: I1011 10:32:08.059420 4776 scope.go:117] "RemoveContainer" containerID="983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc" Oct 11 10:32:09.209762 master-2 kubenswrapper[4776]: I1011 10:32:09.209667 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:32:09.210442 master-2 kubenswrapper[4776]: I1011 10:32:09.209780 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:32:09.274411 master-2 kubenswrapper[4776]: I1011 10:32:09.274260 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd/1.log" Oct 11 10:32:09.277393 master-2 kubenswrapper[4776]: I1011 10:32:09.277353 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"c492168afa20f49cb6e3534e1871011b","Type":"ContainerStarted","Data":"4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737"} Oct 11 10:32:09.324075 master-2 kubenswrapper[4776]: I1011 10:32:09.323988 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-2" podStartSLOduration=24.323957941 podStartE2EDuration="24.323957941s" podCreationTimestamp="2025-10-11 10:31:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:32:09.318849952 +0000 UTC m=+364.103276671" watchObservedRunningTime="2025-10-11 10:32:09.323957941 +0000 UTC m=+364.108384660" Oct 11 10:32:10.629542 master-2 kubenswrapper[4776]: I1011 10:32:10.629462 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-guard-master-2" Oct 11 10:32:11.195495 master-2 kubenswrapper[4776]: I1011 10:32:11.195429 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-2" Oct 11 10:32:12.373274 master-2 
kubenswrapper[4776]: I1011 10:32:12.373204 4776 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-5wrz6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:32:12.373274 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:12.374498 master-2 kubenswrapper[4776]: I1011 10:32:12.373292 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: I1011 10:32:12.931852 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:32:12.931931 master-2 
kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:32:12.931931 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:12.932630 master-2 kubenswrapper[4776]: I1011 10:32:12.931955 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:16.197232 master-2 kubenswrapper[4776]: I1011 10:32:16.195426 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-2" Oct 11 10:32:16.316916 master-2 kubenswrapper[4776]: I1011 10:32:16.316851 4776 generic.go:334] "Generic (PLEG): container finished" podID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerID="d0cb5ca87b019c6dd7de016a32a463f61a07f9fbd819c59a6476d827605ded9c" exitCode=0 Oct 11 10:32:16.317081 master-2 kubenswrapper[4776]: I1011 10:32:16.316901 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-57kcw" event={"ID":"c8cd90ff-e70c-4837-82c4-0fec67a8a51b","Type":"ContainerDied","Data":"d0cb5ca87b019c6dd7de016a32a463f61a07f9fbd819c59a6476d827605ded9c"} Oct 11 10:32:17.327592 master-2 kubenswrapper[4776]: I1011 10:32:17.327522 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-57kcw" event={"ID":"c8cd90ff-e70c-4837-82c4-0fec67a8a51b","Type":"ContainerStarted","Data":"532aa417a81b450c3ee53605926d217d88d13c85c6a1d9e5ea21cc1e35ca9346"} Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: I1011 10:32:17.378915 4776 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-5wrz6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:32:17.378981 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:17.380054 master-2 kubenswrapper[4776]: I1011 10:32:17.380007 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podUID="e350b624-6581-4982-96f3-cd5c37256e02" 
containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:17.808207 master-2 kubenswrapper[4776]: I1011 10:32:17.808155 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log" Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: I1011 10:32:17.936939 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:32:17.937009 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:17.938487 master-2 kubenswrapper[4776]: I1011 10:32:17.937874 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:17.967378 master-2 kubenswrapper[4776]: I1011 10:32:17.967310 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:32:17.967378 master-2 kubenswrapper[4776]: I1011 10:32:17.967381 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:32:17.971420 master-2 kubenswrapper[4776]: I1011 10:32:17.971339 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:17.971420 master-2 
kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:17.971420 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:17.971420 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:17.971652 master-2 kubenswrapper[4776]: I1011 10:32:17.971469 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:18.970838 master-2 kubenswrapper[4776]: I1011 10:32:18.970705 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:18.970838 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:18.970838 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:18.970838 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:18.971913 master-2 kubenswrapper[4776]: I1011 10:32:18.970851 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:19.969836 master-2 kubenswrapper[4776]: I1011 10:32:19.969747 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:19.969836 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:19.969836 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:19.969836 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:19.970420 master-2 kubenswrapper[4776]: I1011 10:32:19.969847 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:20.970511 master-2 kubenswrapper[4776]: I1011 10:32:20.970468 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:20.970511 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:20.970511 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:20.970511 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:20.971313 master-2 kubenswrapper[4776]: I1011 10:32:20.971277 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:21.970169 master-2 kubenswrapper[4776]: I1011 10:32:21.970063 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:21.970169 master-2 
kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:21.970169 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:21.970169 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:21.970169 master-2 kubenswrapper[4776]: I1011 10:32:21.970156 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: I1011 10:32:22.377930 4776 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-5wrz6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:22.378079 master-2 kubenswrapper[4776]: I1011 10:32:22.378006 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: I1011 10:32:22.932396 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:32:22.932469 master-2 
kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:32:22.932469 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:22.933598 master-2 kubenswrapper[4776]: I1011 10:32:22.932474 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:22.969703 master-2 kubenswrapper[4776]: I1011 10:32:22.969606 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:22.969703 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:22.969703 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:22.969703 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:22.969977 master-2 kubenswrapper[4776]: I1011 10:32:22.969704 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:23.968961 master-2 kubenswrapper[4776]: I1011 10:32:23.968907 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:23.968961 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:23.968961 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:23.968961 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:23.969577 master-2 kubenswrapper[4776]: I1011 10:32:23.968984 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:24.969787 master-2 kubenswrapper[4776]: I1011 10:32:24.969727 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:24.969787 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:24.969787 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:24.969787 master-2 
kubenswrapper[4776]: healthz check failed Oct 11 10:32:24.970849 master-2 kubenswrapper[4776]: I1011 10:32:24.969789 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:25.969667 master-2 kubenswrapper[4776]: I1011 10:32:25.969606 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:25.969667 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:25.969667 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:25.969667 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:25.969667 master-2 kubenswrapper[4776]: I1011 10:32:25.969662 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:26.213669 master-2 kubenswrapper[4776]: I1011 10:32:26.213581 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-2" Oct 11 10:32:26.231262 master-2 kubenswrapper[4776]: I1011 10:32:26.231067 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-2" Oct 11 10:32:26.970122 master-2 kubenswrapper[4776]: I1011 10:32:26.970038 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:26.970122 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:26.970122 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:26.970122 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:26.971065 master-2 kubenswrapper[4776]: I1011 10:32:26.970122 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: I1011 10:32:27.374812 4776 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-5wrz6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 
10:32:27.375021 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:27.375021 master-2 kubenswrapper[4776]: I1011 10:32:27.374922 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: I1011 10:32:27.929490 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:32:27.929558 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:27.930408 master-2 kubenswrapper[4776]: I1011 10:32:27.929578 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:27.969247 master-2 kubenswrapper[4776]: I1011 10:32:27.969158 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Oct 11 10:32:27.969247 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:27.969247 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:27.969247 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:27.969247 master-2 kubenswrapper[4776]: I1011 10:32:27.969233 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:28.969767 master-2 kubenswrapper[4776]: I1011 10:32:28.969710 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:28.969767 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:28.969767 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:28.969767 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:28.970920 master-2 kubenswrapper[4776]: I1011 10:32:28.970877 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:29.969474 master-2 kubenswrapper[4776]: I1011 10:32:29.969382 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:29.969474 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:29.969474 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:29.969474 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:29.969474 master-2 kubenswrapper[4776]: I1011 10:32:29.969456 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:30.969645 master-2 kubenswrapper[4776]: I1011 10:32:30.969577 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:30.969645 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:30.969645 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:30.969645 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:30.970428 master-2 kubenswrapper[4776]: I1011 10:32:30.969648 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:31.969458 master-2 kubenswrapper[4776]: I1011 10:32:31.969390 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Oct 11 10:32:31.969458 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:31.969458 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:31.969458 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:31.969836 master-2 kubenswrapper[4776]: I1011 10:32:31.969458 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:32.367945 master-2 kubenswrapper[4776]: I1011 10:32:32.367743 4776 patch_prober.go:28] interesting pod/apiserver-65b6f4d4c9-5wrz6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.42:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.42:8443: connect: connection refused" start-of-body= Oct 11 10:32:32.367945 master-2 kubenswrapper[4776]: I1011 10:32:32.367842 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.42:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.42:8443: connect: connection refused" Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: I1011 10:32:32.929292 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:32:32.929353 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:32.930174 master-2 kubenswrapper[4776]: I1011 10:32:32.929367 
4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:32.969560 master-2 kubenswrapper[4776]: I1011 10:32:32.969402 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:32.969560 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:32.969560 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:32.969560 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:32.969560 master-2 kubenswrapper[4776]: I1011 10:32:32.969464 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:33.100717 master-2 kubenswrapper[4776]: I1011 10:32:33.100145 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:32:33.141318 master-2 kubenswrapper[4776]: I1011 10:32:33.141268 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"] Oct 11 10:32:33.141493 master-2 kubenswrapper[4776]: E1011 10:32:33.141442 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="fix-audit-permissions" Oct 11 10:32:33.141493 master-2 kubenswrapper[4776]: I1011 10:32:33.141454 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="fix-audit-permissions" Oct 11 10:32:33.141493 master-2 kubenswrapper[4776]: E1011 10:32:33.141470 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" Oct 11 10:32:33.141493 master-2 kubenswrapper[4776]: I1011 10:32:33.141476 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" Oct 11 10:32:33.141614 master-2 kubenswrapper[4776]: I1011 10:32:33.141567 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="e350b624-6581-4982-96f3-cd5c37256e02" containerName="oauth-apiserver" Oct 11 10:32:33.142082 master-2 kubenswrapper[4776]: I1011 10:32:33.142057 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.153325 master-2 kubenswrapper[4776]: I1011 10:32:33.153283 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"] Oct 11 10:32:33.164460 master-2 kubenswrapper[4776]: I1011 10:32:33.164298 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-encryption-config\") pod \"e350b624-6581-4982-96f3-cd5c37256e02\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " Oct 11 10:32:33.164460 master-2 kubenswrapper[4776]: I1011 10:32:33.164331 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-etcd-client\") pod \"e350b624-6581-4982-96f3-cd5c37256e02\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " Oct 11 10:32:33.164460 master-2 kubenswrapper[4776]: I1011 10:32:33.164386 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e350b624-6581-4982-96f3-cd5c37256e02-audit-dir\") pod \"e350b624-6581-4982-96f3-cd5c37256e02\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " Oct 11 10:32:33.164460 master-2 kubenswrapper[4776]: I1011 10:32:33.164405 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-etcd-serving-ca\") pod \"e350b624-6581-4982-96f3-cd5c37256e02\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " Oct 11 10:32:33.164460 master-2 kubenswrapper[4776]: I1011 10:32:33.164427 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-trusted-ca-bundle\") pod \"e350b624-6581-4982-96f3-cd5c37256e02\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " Oct 11 10:32:33.164460 master-2 kubenswrapper[4776]: I1011 10:32:33.164424 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e350b624-6581-4982-96f3-cd5c37256e02-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e350b624-6581-4982-96f3-cd5c37256e02" (UID: "e350b624-6581-4982-96f3-cd5c37256e02"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:32:33.164460 master-2 kubenswrapper[4776]: I1011 10:32:33.164445 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqlgq\" (UniqueName: \"kubernetes.io/projected/e350b624-6581-4982-96f3-cd5c37256e02-kube-api-access-tqlgq\") pod \"e350b624-6581-4982-96f3-cd5c37256e02\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " Oct 11 10:32:33.164460 master-2 kubenswrapper[4776]: I1011 10:32:33.164467 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-audit-policies\") pod \"e350b624-6581-4982-96f3-cd5c37256e02\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.164616 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert\") pod \"e350b624-6581-4982-96f3-cd5c37256e02\" (UID: \"e350b624-6581-4982-96f3-cd5c37256e02\") " Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.164771 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-audit-policies\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.164808 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-encryption-config\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.164827 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54fhv\" (UniqueName: \"kubernetes.io/projected/cc095688-9188-4472-9c26-d4d286e5ef06-kube-api-access-54fhv\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.165036 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-trusted-ca-bundle\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.165091 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-serving-cert\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.165161 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/cc095688-9188-4472-9c26-d4d286e5ef06-audit-dir\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.165187 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-client\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.165213 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "e350b624-6581-4982-96f3-cd5c37256e02" (UID: "e350b624-6581-4982-96f3-cd5c37256e02"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.165221 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e350b624-6581-4982-96f3-cd5c37256e02" (UID: "e350b624-6581-4982-96f3-cd5c37256e02"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.165283 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-serving-ca\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.165346 4776 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e350b624-6581-4982-96f3-cd5c37256e02-audit-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.165359 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-etcd-serving-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:33.165366 master-2 kubenswrapper[4776]: I1011 10:32:33.165369 4776 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:33.165780 master-2 kubenswrapper[4776]: I1011 10:32:33.165591 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e350b624-6581-4982-96f3-cd5c37256e02" (UID: "e350b624-6581-4982-96f3-cd5c37256e02"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:32:33.170185 master-2 kubenswrapper[4776]: I1011 10:32:33.170072 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e350b624-6581-4982-96f3-cd5c37256e02" (UID: "e350b624-6581-4982-96f3-cd5c37256e02"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:32:33.170875 master-2 kubenswrapper[4776]: I1011 10:32:33.170831 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "e350b624-6581-4982-96f3-cd5c37256e02" (UID: "e350b624-6581-4982-96f3-cd5c37256e02"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:32:33.170976 master-2 kubenswrapper[4776]: I1011 10:32:33.170923 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e350b624-6581-4982-96f3-cd5c37256e02-kube-api-access-tqlgq" (OuterVolumeSpecName: "kube-api-access-tqlgq") pod "e350b624-6581-4982-96f3-cd5c37256e02" (UID: "e350b624-6581-4982-96f3-cd5c37256e02"). InnerVolumeSpecName "kube-api-access-tqlgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:32:33.171118 master-2 kubenswrapper[4776]: I1011 10:32:33.171080 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "e350b624-6581-4982-96f3-cd5c37256e02" (UID: "e350b624-6581-4982-96f3-cd5c37256e02"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:32:33.267029 master-2 kubenswrapper[4776]: I1011 10:32:33.266889 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-serving-ca\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.267029 master-2 kubenswrapper[4776]: I1011 10:32:33.266956 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-audit-policies\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.267029 master-2 kubenswrapper[4776]: I1011 10:32:33.267003 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-encryption-config\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.267029 master-2 kubenswrapper[4776]: I1011 10:32:33.267029 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54fhv\" (UniqueName: \"kubernetes.io/projected/cc095688-9188-4472-9c26-d4d286e5ef06-kube-api-access-54fhv\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.267287 master-2 kubenswrapper[4776]: I1011 10:32:33.267076 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-trusted-ca-bundle\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.267287 master-2 kubenswrapper[4776]: I1011 10:32:33.267104 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-serving-cert\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.267287 master-2 kubenswrapper[4776]: I1011 10:32:33.267167 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cc095688-9188-4472-9c26-d4d286e5ef06-audit-dir\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.267287 master-2 kubenswrapper[4776]: I1011 10:32:33.267191 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-client\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.267287 master-2 kubenswrapper[4776]: I1011 10:32:33.267237 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:33.267287 master-2 kubenswrapper[4776]: I1011 10:32:33.267253 4776 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-encryption-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:33.267287 master-2 kubenswrapper[4776]: I1011 10:32:33.267265 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e350b624-6581-4982-96f3-cd5c37256e02-etcd-client\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:33.267287 master-2 kubenswrapper[4776]: I1011 10:32:33.267278 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqlgq\" (UniqueName: \"kubernetes.io/projected/e350b624-6581-4982-96f3-cd5c37256e02-kube-api-access-tqlgq\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:33.267287 master-2 kubenswrapper[4776]: I1011 10:32:33.267292 4776 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e350b624-6581-4982-96f3-cd5c37256e02-audit-policies\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:33.267844 master-2 kubenswrapper[4776]: I1011 10:32:33.267781 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cc095688-9188-4472-9c26-d4d286e5ef06-audit-dir\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.268358 master-2 kubenswrapper[4776]: I1011 10:32:33.268311 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-audit-policies\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.268551 master-2 kubenswrapper[4776]: I1011 10:32:33.268519 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-trusted-ca-bundle\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.268595 master-2 kubenswrapper[4776]: I1011 10:32:33.268517 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-serving-ca\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.270699 master-2 kubenswrapper[4776]: I1011 10:32:33.270646 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-serving-cert\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.270942 master-2 kubenswrapper[4776]: I1011 10:32:33.270913 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-client\") pod 
\"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.271706 master-2 kubenswrapper[4776]: I1011 10:32:33.271650 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-encryption-config\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.299002 master-2 kubenswrapper[4776]: I1011 10:32:33.298929 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54fhv\" (UniqueName: \"kubernetes.io/projected/cc095688-9188-4472-9c26-d4d286e5ef06-kube-api-access-54fhv\") pod \"apiserver-68f4c55ff4-tv729\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.421571 master-2 kubenswrapper[4776]: I1011 10:32:33.421487 4776 generic.go:334] "Generic (PLEG): container finished" podID="e350b624-6581-4982-96f3-cd5c37256e02" containerID="c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912" exitCode=0 Oct 11 10:32:33.421571 master-2 kubenswrapper[4776]: I1011 10:32:33.421532 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" event={"ID":"e350b624-6581-4982-96f3-cd5c37256e02","Type":"ContainerDied","Data":"c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912"} Oct 11 10:32:33.421571 master-2 kubenswrapper[4776]: I1011 10:32:33.421559 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" Oct 11 10:32:33.421571 master-2 kubenswrapper[4776]: I1011 10:32:33.421568 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6" event={"ID":"e350b624-6581-4982-96f3-cd5c37256e02","Type":"ContainerDied","Data":"7123427ff4a739a7b15f9487edb2172df73189bcf0b6f9273cbe9a3faa4de58f"} Oct 11 10:32:33.421571 master-2 kubenswrapper[4776]: I1011 10:32:33.421585 4776 scope.go:117] "RemoveContainer" containerID="c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912" Oct 11 10:32:33.445487 master-2 kubenswrapper[4776]: I1011 10:32:33.445408 4776 scope.go:117] "RemoveContainer" containerID="e63a68a9ce6429385cd633344bba068b7bcee26a7703c223cb15b256f8b6c278" Oct 11 10:32:33.458927 master-2 kubenswrapper[4776]: I1011 10:32:33.454426 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:33.470829 master-2 kubenswrapper[4776]: I1011 10:32:33.470737 4776 scope.go:117] "RemoveContainer" containerID="c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912" Oct 11 10:32:33.472099 master-2 kubenswrapper[4776]: I1011 10:32:33.471897 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6"] Oct 11 10:32:33.472356 master-2 kubenswrapper[4776]: E1011 10:32:33.472279 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912\": container with ID starting with c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912 not found: ID does not exist" containerID="c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912" Oct 11 10:32:33.472409 master-2 kubenswrapper[4776]: I1011 10:32:33.472361 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912"} err="failed to get container status \"c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912\": rpc error: code = NotFound desc = could not find container \"c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912\": container with ID starting with c4d78c91b7b925370318b3894cd36c2db2fdad070d8b33ed083926b606596912 not found: ID does not exist" Oct 11 10:32:33.472409 master-2 kubenswrapper[4776]: I1011 10:32:33.472396 4776 scope.go:117] "RemoveContainer" containerID="e63a68a9ce6429385cd633344bba068b7bcee26a7703c223cb15b256f8b6c278" Oct 11 10:32:33.472836 master-2 kubenswrapper[4776]: E1011 10:32:33.472776 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e63a68a9ce6429385cd633344bba068b7bcee26a7703c223cb15b256f8b6c278\": container with ID starting with e63a68a9ce6429385cd633344bba068b7bcee26a7703c223cb15b256f8b6c278 not found: ID does not exist" containerID="e63a68a9ce6429385cd633344bba068b7bcee26a7703c223cb15b256f8b6c278" Oct 11 10:32:33.472836 master-2 kubenswrapper[4776]: I1011 10:32:33.472806 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e63a68a9ce6429385cd633344bba068b7bcee26a7703c223cb15b256f8b6c278"} err="failed to get container status \"e63a68a9ce6429385cd633344bba068b7bcee26a7703c223cb15b256f8b6c278\": rpc error: code = NotFound desc = could not find container \"e63a68a9ce6429385cd633344bba068b7bcee26a7703c223cb15b256f8b6c278\": container with ID starting with e63a68a9ce6429385cd633344bba068b7bcee26a7703c223cb15b256f8b6c278 not found: ID does not exist" Oct 11 10:32:33.479892 master-2 kubenswrapper[4776]: I1011 10:32:33.479860 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6"] Oct 11 10:32:33.916519 master-2 kubenswrapper[4776]: I1011 10:32:33.916457 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"] Oct 11 10:32:33.970120 master-2 kubenswrapper[4776]: I1011 10:32:33.970034 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:33.970120 
master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:33.970120 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:33.970120 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:33.970120 master-2 kubenswrapper[4776]: I1011 10:32:33.970107 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:34.065552 master-2 kubenswrapper[4776]: I1011 10:32:34.064901 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e350b624-6581-4982-96f3-cd5c37256e02" path="/var/lib/kubelet/pods/e350b624-6581-4982-96f3-cd5c37256e02/volumes" Oct 11 10:32:34.429110 master-2 kubenswrapper[4776]: I1011 10:32:34.429066 4776 generic.go:334] "Generic (PLEG): container finished" podID="cc095688-9188-4472-9c26-d4d286e5ef06" containerID="890baf1a750c905b81b3a86397294058183d567d6c2fdd860242c1e809168b9e" exitCode=0 Oct 11 10:32:34.429612 master-2 kubenswrapper[4776]: I1011 10:32:34.429119 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" event={"ID":"cc095688-9188-4472-9c26-d4d286e5ef06","Type":"ContainerDied","Data":"890baf1a750c905b81b3a86397294058183d567d6c2fdd860242c1e809168b9e"} Oct 11 10:32:34.429612 master-2 kubenswrapper[4776]: I1011 10:32:34.429162 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" event={"ID":"cc095688-9188-4472-9c26-d4d286e5ef06","Type":"ContainerStarted","Data":"38caa553aa3028fefa0c3bd77280e5deedf30358e11b27817863ca0e8b11f26f"} Oct 11 10:32:34.972737 master-2 kubenswrapper[4776]: I1011 10:32:34.972616 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:34.972737 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:34.972737 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:34.972737 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:34.972737 master-2 kubenswrapper[4776]: I1011 10:32:34.972714 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:35.441960 master-2 kubenswrapper[4776]: I1011 10:32:35.441878 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" event={"ID":"cc095688-9188-4472-9c26-d4d286e5ef06","Type":"ContainerStarted","Data":"6dd550d507d66e801941ec8d8dccd203204326eb4fa9e98d9d9de574d26fd168"} Oct 11 10:32:35.473707 master-2 kubenswrapper[4776]: I1011 10:32:35.473583 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podStartSLOduration=20.473561927 podStartE2EDuration="20.473561927s" podCreationTimestamp="2025-10-11 10:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:32:35.468932542 +0000 UTC m=+390.253359271" watchObservedRunningTime="2025-10-11 10:32:35.473561927 +0000 
UTC m=+390.257988666" Oct 11 10:32:35.969464 master-2 kubenswrapper[4776]: I1011 10:32:35.969392 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:35.969464 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:35.969464 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:35.969464 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:35.969896 master-2 kubenswrapper[4776]: I1011 10:32:35.969486 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:36.969872 master-2 kubenswrapper[4776]: I1011 10:32:36.969716 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:36.969872 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:36.969872 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:36.969872 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:36.969872 master-2 kubenswrapper[4776]: I1011 10:32:36.969825 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:37.587871 master-2 kubenswrapper[4776]: E1011 10:32:37.587811 4776 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-controller-manager-pod.yaml\": /etc/kubernetes/manifests/kube-controller-manager-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Oct 11 10:32:37.591278 master-2 kubenswrapper[4776]: I1011 10:32:37.591211 4776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"] Oct 11 10:32:37.594069 master-2 kubenswrapper[4776]: I1011 10:32:37.594025 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:37.643226 master-2 kubenswrapper[4776]: I1011 10:32:37.643165 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"] Oct 11 10:32:37.646639 master-2 kubenswrapper[4776]: I1011 10:32:37.646602 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-cert-dir\") pod \"kube-controller-manager-master-2\" (UID: \"4f88b73b0d121e855641834122063be9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:37.646704 master-2 kubenswrapper[4776]: I1011 10:32:37.646661 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-resource-dir\") pod \"kube-controller-manager-master-2\" (UID: \"4f88b73b0d121e855641834122063be9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:37.747897 master-2 kubenswrapper[4776]: I1011 10:32:37.747825 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-cert-dir\") pod \"kube-controller-manager-master-2\" (UID: \"4f88b73b0d121e855641834122063be9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:37.747897 master-2 kubenswrapper[4776]: I1011 10:32:37.747885 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-resource-dir\") pod \"kube-controller-manager-master-2\" (UID: \"4f88b73b0d121e855641834122063be9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:37.748289 master-2 kubenswrapper[4776]: I1011 10:32:37.747953 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-resource-dir\") pod \"kube-controller-manager-master-2\" (UID: \"4f88b73b0d121e855641834122063be9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:37.748289 master-2 kubenswrapper[4776]: I1011 10:32:37.747985 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-cert-dir\") pod \"kube-controller-manager-master-2\" (UID: \"4f88b73b0d121e855641834122063be9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: I1011 10:32:37.932573 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 
10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:37.932745 master-2 kubenswrapper[4776]: I1011 10:32:37.932635 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:37.942165 master-2 kubenswrapper[4776]: I1011 10:32:37.942082 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:37.970203 master-2 kubenswrapper[4776]: I1011 10:32:37.970130 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:37.970203 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:37.970203 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:37.970203 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:37.971115 master-2 kubenswrapper[4776]: I1011 10:32:37.970223 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:37.976463 master-2 kubenswrapper[4776]: W1011 10:32:37.976375 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f88b73b0d121e855641834122063be9.slice/crio-916ddd0f284b303e9bb4961df811012eddf3484459c699dfd12d49002c642155 WatchSource:0}: Error finding container 916ddd0f284b303e9bb4961df811012eddf3484459c699dfd12d49002c642155: Status 404 returned error can't find the container with id 916ddd0f284b303e9bb4961df811012eddf3484459c699dfd12d49002c642155 Oct 11 10:32:38.455048 master-2 kubenswrapper[4776]: I1011 10:32:38.454984 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:38.455394 master-2 kubenswrapper[4776]: I1011 10:32:38.455367 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:38.467578 master-2 kubenswrapper[4776]: I1011 10:32:38.467538 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:38.469795 master-2 kubenswrapper[4776]: I1011 10:32:38.469752 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"4f88b73b0d121e855641834122063be9","Type":"ContainerStarted","Data":"916ddd0f284b303e9bb4961df811012eddf3484459c699dfd12d49002c642155"} Oct 11 10:32:38.472361 master-2 kubenswrapper[4776]: I1011 10:32:38.472297 4776 generic.go:334] "Generic (PLEG): container finished" podID="860e97e2-10a4-4a16-ac4e-4a0fc7490200" containerID="26ed7de5d40500630aec8d65a4a8355cae8b9e280b195543aa3fdc8cfbc315c3" exitCode=0 Oct 11 10:32:38.472501 master-2 kubenswrapper[4776]: I1011 10:32:38.472373 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-2" event={"ID":"860e97e2-10a4-4a16-ac4e-4a0fc7490200","Type":"ContainerDied","Data":"26ed7de5d40500630aec8d65a4a8355cae8b9e280b195543aa3fdc8cfbc315c3"} Oct 11 10:32:38.479527 master-2 kubenswrapper[4776]: I1011 10:32:38.479487 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:32:38.644866 master-2 kubenswrapper[4776]: E1011 10:32:38.644745 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context 
deadline exceeded" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" podUID="17bef070-1a9d-4090-b97a-7ce2c1c93b19" Oct 11 10:32:38.968993 master-2 kubenswrapper[4776]: I1011 10:32:38.968891 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:38.968993 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:38.968993 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:38.968993 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:38.968993 master-2 kubenswrapper[4776]: I1011 10:32:38.968992 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:39.478356 master-2 kubenswrapper[4776]: I1011 10:32:39.478287 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:32:39.770562 master-2 kubenswrapper[4776]: I1011 10:32:39.770467 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 10:32:39.782887 master-2 kubenswrapper[4776]: I1011 10:32:39.782401 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kubelet-dir\") pod \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") " Oct 11 10:32:39.782887 master-2 kubenswrapper[4776]: I1011 10:32:39.782590 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-var-lock\") pod \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") " Oct 11 10:32:39.782887 master-2 kubenswrapper[4776]: I1011 10:32:39.782808 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "860e97e2-10a4-4a16-ac4e-4a0fc7490200" (UID: "860e97e2-10a4-4a16-ac4e-4a0fc7490200"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:32:39.782887 master-2 kubenswrapper[4776]: I1011 10:32:39.782890 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-var-lock" (OuterVolumeSpecName: "var-lock") pod "860e97e2-10a4-4a16-ac4e-4a0fc7490200" (UID: "860e97e2-10a4-4a16-ac4e-4a0fc7490200"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:32:39.783830 master-2 kubenswrapper[4776]: I1011 10:32:39.783426 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kube-api-access\") pod \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\" (UID: \"860e97e2-10a4-4a16-ac4e-4a0fc7490200\") " Oct 11 10:32:39.784508 master-2 kubenswrapper[4776]: I1011 10:32:39.784439 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kubelet-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:39.784508 master-2 kubenswrapper[4776]: I1011 10:32:39.784473 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/860e97e2-10a4-4a16-ac4e-4a0fc7490200-var-lock\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:39.785469 master-2 kubenswrapper[4776]: I1011 10:32:39.785400 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "860e97e2-10a4-4a16-ac4e-4a0fc7490200" (UID: "860e97e2-10a4-4a16-ac4e-4a0fc7490200"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:32:39.885819 master-2 kubenswrapper[4776]: I1011 10:32:39.885735 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/860e97e2-10a4-4a16-ac4e-4a0fc7490200-kube-api-access\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:39.970398 master-2 kubenswrapper[4776]: I1011 10:32:39.970300 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:39.970398 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:39.970398 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:39.970398 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:39.970398 master-2 kubenswrapper[4776]: I1011 10:32:39.970382 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:40.485459 master-2 kubenswrapper[4776]: I1011 10:32:40.485414 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-2" event={"ID":"860e97e2-10a4-4a16-ac4e-4a0fc7490200","Type":"ContainerDied","Data":"2a0c267eb08670f3fe6902b94f01392021a837de4570d7e530b15613d10477eb"} Oct 11 10:32:40.485459 master-2 kubenswrapper[4776]: I1011 10:32:40.485462 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a0c267eb08670f3fe6902b94f01392021a837de4570d7e530b15613d10477eb" Oct 11 10:32:40.485459 master-2 kubenswrapper[4776]: I1011 10:32:40.485427 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-2" Oct 11 10:32:40.969645 master-2 kubenswrapper[4776]: I1011 10:32:40.969586 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:40.969645 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:40.969645 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:40.969645 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:40.969928 master-2 kubenswrapper[4776]: I1011 10:32:40.969646 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:41.661073 master-2 kubenswrapper[4776]: E1011 10:32:41.661009 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" podUID="cacd2d60-e8a5-450f-a4ad-dfc0194e3325" Oct 11 10:32:41.707723 master-2 kubenswrapper[4776]: I1011 10:32:41.707664 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:32:41.707904 master-2 kubenswrapper[4776]: E1011 10:32:41.707777 4776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:32:41.707904 master-2 kubenswrapper[4776]: E1011 10:32:41.707855 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:34:43.707832802 +0000 UTC m=+518.492259511 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : configmap "client-ca" not found Oct 11 10:32:41.970327 master-2 kubenswrapper[4776]: I1011 10:32:41.970171 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:41.970327 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:41.970327 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:41.970327 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:41.970327 master-2 kubenswrapper[4776]: I1011 10:32:41.970236 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:42.496981 master-2 kubenswrapper[4776]: I1011 10:32:42.496939 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:32:42.925019 master-2 kubenswrapper[4776]: I1011 10:32:42.924968 4776 patch_prober.go:28] interesting pod/apiserver-555f658fd6-wmcqt container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.41:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.41:8443: connect: connection refused" start-of-body= Oct 11 10:32:42.925463 master-2 kubenswrapper[4776]: I1011 10:32:42.925022 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.41:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.41:8443: connect: connection refused" Oct 11 10:32:42.969255 master-2 kubenswrapper[4776]: I1011 10:32:42.969209 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:42.969255 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:42.969255 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:42.969255 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:42.969514 master-2 kubenswrapper[4776]: I1011 10:32:42.969273 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:42.984087 master-2 kubenswrapper[4776]: I1011 10:32:42.983521 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-2"] Oct 11 10:32:42.984087 master-2 kubenswrapper[4776]: E1011 10:32:42.983735 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="860e97e2-10a4-4a16-ac4e-4a0fc7490200" containerName="installer" Oct 11 10:32:42.984087 master-2 kubenswrapper[4776]: I1011 10:32:42.983746 
4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="860e97e2-10a4-4a16-ac4e-4a0fc7490200" containerName="installer" Oct 11 10:32:42.984087 master-2 kubenswrapper[4776]: I1011 10:32:42.983844 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="860e97e2-10a4-4a16-ac4e-4a0fc7490200" containerName="installer" Oct 11 10:32:42.984476 master-2 kubenswrapper[4776]: I1011 10:32:42.984228 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-2" Oct 11 10:32:42.988302 master-2 kubenswrapper[4776]: I1011 10:32:42.988265 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 11 10:32:42.998173 master-2 kubenswrapper[4776]: I1011 10:32:42.998144 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-2"] Oct 11 10:32:43.026337 master-2 kubenswrapper[4776]: I1011 10:32:43.026285 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-var-lock\") pod \"installer-2-master-2\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " pod="openshift-kube-apiserver/installer-2-master-2" Oct 11 10:32:43.026508 master-2 kubenswrapper[4776]: I1011 10:32:43.026477 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-kubelet-dir\") pod \"installer-2-master-2\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " pod="openshift-kube-apiserver/installer-2-master-2" Oct 11 10:32:43.026555 master-2 kubenswrapper[4776]: I1011 10:32:43.026529 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7d02073-00a3-41a2-8ca4-6932819886b8-kube-api-access\") pod \"installer-2-master-2\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " pod="openshift-kube-apiserver/installer-2-master-2" Oct 11 10:32:43.127600 master-2 kubenswrapper[4776]: I1011 10:32:43.127498 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-kubelet-dir\") pod \"installer-2-master-2\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " pod="openshift-kube-apiserver/installer-2-master-2" Oct 11 10:32:43.127600 master-2 kubenswrapper[4776]: I1011 10:32:43.127538 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7d02073-00a3-41a2-8ca4-6932819886b8-kube-api-access\") pod \"installer-2-master-2\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " pod="openshift-kube-apiserver/installer-2-master-2" Oct 11 10:32:43.127600 master-2 kubenswrapper[4776]: I1011 10:32:43.127573 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-var-lock\") pod \"installer-2-master-2\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " pod="openshift-kube-apiserver/installer-2-master-2" Oct 11 10:32:43.127938 master-2 kubenswrapper[4776]: I1011 10:32:43.127640 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-kubelet-dir\") pod \"installer-2-master-2\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " pod="openshift-kube-apiserver/installer-2-master-2" Oct 11 10:32:43.127938 master-2 kubenswrapper[4776]: I1011 10:32:43.127751 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-var-lock\") pod \"installer-2-master-2\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " pod="openshift-kube-apiserver/installer-2-master-2" Oct 11 10:32:43.148290 master-2 kubenswrapper[4776]: I1011 10:32:43.148216 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7d02073-00a3-41a2-8ca4-6932819886b8-kube-api-access\") pod \"installer-2-master-2\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " pod="openshift-kube-apiserver/installer-2-master-2" Oct 11 10:32:43.302466 master-2 kubenswrapper[4776]: I1011 10:32:43.302417 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-2" Oct 11 10:32:43.505239 master-2 kubenswrapper[4776]: I1011 10:32:43.505188 4776 generic.go:334] "Generic (PLEG): container finished" podID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerID="a486fb47915dcf562b4049ef498fba0a79dc2f0d9c2b35c61e3a77be9dcdeae7" exitCode=0 Oct 11 10:32:43.505239 master-2 kubenswrapper[4776]: I1011 10:32:43.505231 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" event={"ID":"7a89cb41-fb97-4282-a12d-c6f8d87bc41e","Type":"ContainerDied","Data":"a486fb47915dcf562b4049ef498fba0a79dc2f0d9c2b35c61e3a77be9dcdeae7"} Oct 11 10:32:43.970112 master-2 kubenswrapper[4776]: I1011 10:32:43.970028 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:43.970112 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:43.970112 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:43.970112 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:43.970668 master-2 kubenswrapper[4776]: I1011 10:32:43.970156 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:44.649747 master-2 kubenswrapper[4776]: I1011 10:32:44.649503 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:32:44.649953 master-2 kubenswrapper[4776]: E1011 10:32:44.649831 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:32:44.649953 master-2 kubenswrapper[4776]: E1011 10:32:44.649929 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 
nodeName:}" failed. No retries permitted until 2025-10-11 10:34:46.649905515 +0000 UTC m=+521.434332314 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : configmap "client-ca" not found Oct 11 10:32:44.989899 master-2 kubenswrapper[4776]: I1011 10:32:44.989794 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:44.989899 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:44.989899 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:44.989899 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:44.989899 master-2 kubenswrapper[4776]: I1011 10:32:44.989851 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:45.129128 master-2 kubenswrapper[4776]: I1011 10:32:45.129072 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:32:45.156061 master-2 kubenswrapper[4776]: I1011 10:32:45.154810 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-encryption-config\") pod \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " Oct 11 10:32:45.156061 master-2 kubenswrapper[4776]: I1011 10:32:45.154882 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-serving-ca\") pod \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " Oct 11 10:32:45.156061 master-2 kubenswrapper[4776]: I1011 10:32:45.154909 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-trusted-ca-bundle\") pod \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " Oct 11 10:32:45.156061 master-2 kubenswrapper[4776]: I1011 10:32:45.154924 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit-dir\") pod \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " Oct 11 10:32:45.156061 master-2 kubenswrapper[4776]: I1011 10:32:45.154943 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-serving-cert\") pod \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " Oct 11 10:32:45.156061 master-2 kubenswrapper[4776]: I1011 10:32:45.154970 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-config\") pod \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " Oct 11 10:32:45.156061 master-2 kubenswrapper[4776]: I1011 10:32:45.154997 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc5gq\" (UniqueName: \"kubernetes.io/projected/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-kube-api-access-wc5gq\") pod \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " Oct 11 10:32:45.156061 master-2 kubenswrapper[4776]: I1011 10:32:45.155017 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-image-import-ca\") pod \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " Oct 11 10:32:45.156061 master-2 kubenswrapper[4776]: I1011 10:32:45.155032 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-client\") pod \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " Oct 11 10:32:45.156061 master-2 kubenswrapper[4776]: I1011 10:32:45.155076 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-node-pullsecrets\") pod \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " Oct 11 10:32:45.156061 master-2 kubenswrapper[4776]: I1011 10:32:45.155109 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit\") pod \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\" (UID: \"7a89cb41-fb97-4282-a12d-c6f8d87bc41e\") " Oct 11 10:32:45.157723 master-2 kubenswrapper[4776]: I1011 10:32:45.156141 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "7a89cb41-fb97-4282-a12d-c6f8d87bc41e" (UID: "7a89cb41-fb97-4282-a12d-c6f8d87bc41e"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:32:45.157723 master-2 kubenswrapper[4776]: I1011 10:32:45.156180 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-config" (OuterVolumeSpecName: "config") pod "7a89cb41-fb97-4282-a12d-c6f8d87bc41e" (UID: "7a89cb41-fb97-4282-a12d-c6f8d87bc41e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:32:45.157723 master-2 kubenswrapper[4776]: I1011 10:32:45.156222 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "7a89cb41-fb97-4282-a12d-c6f8d87bc41e" (UID: "7a89cb41-fb97-4282-a12d-c6f8d87bc41e"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:32:45.157723 master-2 kubenswrapper[4776]: I1011 10:32:45.156375 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "7a89cb41-fb97-4282-a12d-c6f8d87bc41e" (UID: "7a89cb41-fb97-4282-a12d-c6f8d87bc41e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:32:45.157723 master-2 kubenswrapper[4776]: I1011 10:32:45.156609 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "7a89cb41-fb97-4282-a12d-c6f8d87bc41e" (UID: "7a89cb41-fb97-4282-a12d-c6f8d87bc41e"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:32:45.157723 master-2 kubenswrapper[4776]: I1011 10:32:45.156635 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "7a89cb41-fb97-4282-a12d-c6f8d87bc41e" (UID: "7a89cb41-fb97-4282-a12d-c6f8d87bc41e"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:32:45.157723 master-2 kubenswrapper[4776]: I1011 10:32:45.156819 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit" (OuterVolumeSpecName: "audit") pod "7a89cb41-fb97-4282-a12d-c6f8d87bc41e" (UID: "7a89cb41-fb97-4282-a12d-c6f8d87bc41e"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:32:45.159086 master-2 kubenswrapper[4776]: I1011 10:32:45.159037 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "7a89cb41-fb97-4282-a12d-c6f8d87bc41e" (UID: "7a89cb41-fb97-4282-a12d-c6f8d87bc41e"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:32:45.159319 master-2 kubenswrapper[4776]: I1011 10:32:45.159238 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-kube-api-access-wc5gq" (OuterVolumeSpecName: "kube-api-access-wc5gq") pod "7a89cb41-fb97-4282-a12d-c6f8d87bc41e" (UID: "7a89cb41-fb97-4282-a12d-c6f8d87bc41e"). InnerVolumeSpecName "kube-api-access-wc5gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:32:45.160362 master-2 kubenswrapper[4776]: I1011 10:32:45.160309 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7a89cb41-fb97-4282-a12d-c6f8d87bc41e" (UID: "7a89cb41-fb97-4282-a12d-c6f8d87bc41e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:32:45.161066 master-2 kubenswrapper[4776]: I1011 10:32:45.161015 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "7a89cb41-fb97-4282-a12d-c6f8d87bc41e" (UID: "7a89cb41-fb97-4282-a12d-c6f8d87bc41e"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:32:45.172385 master-2 kubenswrapper[4776]: I1011 10:32:45.172286 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-777cc846dc-729nm"] Oct 11 10:32:45.172623 master-2 kubenswrapper[4776]: E1011 10:32:45.172599 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver-check-endpoints" Oct 11 10:32:45.172623 master-2 kubenswrapper[4776]: I1011 10:32:45.172623 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver-check-endpoints" Oct 11 10:32:45.172776 master-2 kubenswrapper[4776]: E1011 10:32:45.172640 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" Oct 11 10:32:45.172776 master-2 kubenswrapper[4776]: I1011 10:32:45.172649 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" Oct 11 10:32:45.172776 master-2 kubenswrapper[4776]: E1011 10:32:45.172662 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="fix-audit-permissions" Oct 11 10:32:45.172776 master-2 kubenswrapper[4776]: I1011 10:32:45.172670 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="fix-audit-permissions" Oct 11 10:32:45.172973 master-2 kubenswrapper[4776]: I1011 10:32:45.172785 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver-check-endpoints" Oct 11 10:32:45.172973 master-2 kubenswrapper[4776]: I1011 10:32:45.172814 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" containerName="openshift-apiserver" Oct 11 10:32:45.173800 master-2 kubenswrapper[4776]: I1011 10:32:45.173603 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.189201 master-2 kubenswrapper[4776]: I1011 10:32:45.189159 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-777cc846dc-729nm"] Oct 11 10:32:45.256715 master-2 kubenswrapper[4776]: I1011 10:32:45.256634 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-audit\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.256904 master-2 kubenswrapper[4776]: I1011 10:32:45.256879 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-serving-cert\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.256904 master-2 kubenswrapper[4776]: I1011 10:32:45.256929 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-trusted-ca-bundle\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.257154 master-2 kubenswrapper[4776]: I1011 10:32:45.256961 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-node-pullsecrets\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.257154 master-2 kubenswrapper[4776]: I1011 10:32:45.256981 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-etcd-serving-ca\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.257154 master-2 kubenswrapper[4776]: I1011 10:32:45.256998 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-encryption-config\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.257291 master-2 kubenswrapper[4776]: I1011 10:32:45.257169 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-audit-dir\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.257380 master-2 kubenswrapper[4776]: I1011 10:32:45.257349 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-etcd-client\") pod \"apiserver-777cc846dc-729nm\" (UID: 
\"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.257432 master-2 kubenswrapper[4776]: I1011 10:32:45.257385 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-image-import-ca\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.257574 master-2 kubenswrapper[4776]: I1011 10:32:45.257548 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-config\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.257665 master-2 kubenswrapper[4776]: I1011 10:32:45.257616 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qgjs\" (UniqueName: \"kubernetes.io/projected/3a156e42-88da-4ce6-9995-6865609e2711-kube-api-access-9qgjs\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.257819 master-2 kubenswrapper[4776]: I1011 10:32:45.257793 4776 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:45.257819 master-2 kubenswrapper[4776]: I1011 10:32:45.257814 4776 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-encryption-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:45.257907 master-2 kubenswrapper[4776]: I1011 10:32:45.257828 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-serving-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:45.257907 master-2 kubenswrapper[4776]: I1011 10:32:45.257842 4776 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:45.257907 master-2 kubenswrapper[4776]: I1011 10:32:45.257854 4776 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-audit-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:45.257907 master-2 kubenswrapper[4776]: I1011 10:32:45.257865 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:45.257907 master-2 kubenswrapper[4776]: I1011 10:32:45.257879 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:45.257907 master-2 kubenswrapper[4776]: I1011 10:32:45.257891 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc5gq\" (UniqueName: 
\"kubernetes.io/projected/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-kube-api-access-wc5gq\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:45.257907 master-2 kubenswrapper[4776]: I1011 10:32:45.257904 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-etcd-client\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:45.258135 master-2 kubenswrapper[4776]: I1011 10:32:45.257916 4776 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-image-import-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:45.258135 master-2 kubenswrapper[4776]: I1011 10:32:45.257929 4776 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a89cb41-fb97-4282-a12d-c6f8d87bc41e-node-pullsecrets\") on node \"master-2\" DevicePath \"\"" Oct 11 10:32:45.279240 master-2 kubenswrapper[4776]: I1011 10:32:45.279199 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-2"] Oct 11 10:32:45.287308 master-2 kubenswrapper[4776]: W1011 10:32:45.287263 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf WatchSource:0}: Error finding container d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf: Status 404 returned error can't find the container with id d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf Oct 11 10:32:45.359589 master-2 kubenswrapper[4776]: I1011 10:32:45.359523 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-node-pullsecrets\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.359781 master-2 kubenswrapper[4776]: I1011 10:32:45.359632 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-etcd-serving-ca\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.359781 master-2 kubenswrapper[4776]: I1011 10:32:45.359659 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-encryption-config\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.359781 master-2 kubenswrapper[4776]: I1011 10:32:45.359539 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-node-pullsecrets\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.359781 master-2 kubenswrapper[4776]: I1011 10:32:45.359710 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-audit-dir\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.359781 master-2 kubenswrapper[4776]: I1011 10:32:45.359761 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-image-import-ca\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.359781 master-2 kubenswrapper[4776]: I1011 10:32:45.359773 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-audit-dir\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.359998 master-2 kubenswrapper[4776]: I1011 10:32:45.359785 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-etcd-client\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.359998 master-2 kubenswrapper[4776]: I1011 10:32:45.359860 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-config\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.359998 master-2 kubenswrapper[4776]: I1011 10:32:45.359896 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qgjs\" (UniqueName: \"kubernetes.io/projected/3a156e42-88da-4ce6-9995-6865609e2711-kube-api-access-9qgjs\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.359998 master-2 kubenswrapper[4776]: I1011 10:32:45.359932 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-audit\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.359998 master-2 kubenswrapper[4776]: I1011 10:32:45.359954 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-serving-cert\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.359998 master-2 kubenswrapper[4776]: I1011 10:32:45.360000 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-trusted-ca-bundle\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.360937 master-2 kubenswrapper[4776]: I1011 10:32:45.360883 4776 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-image-import-ca\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.361006 master-2 kubenswrapper[4776]: I1011 10:32:45.360970 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-audit\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.361456 master-2 kubenswrapper[4776]: I1011 10:32:45.361428 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-config\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.361576 master-2 kubenswrapper[4776]: I1011 10:32:45.361544 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-trusted-ca-bundle\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.362003 master-2 kubenswrapper[4776]: I1011 10:32:45.361976 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-etcd-serving-ca\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.362844 master-2 kubenswrapper[4776]: I1011 10:32:45.362816 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-serving-cert\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.363554 master-2 kubenswrapper[4776]: I1011 10:32:45.363521 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-etcd-client\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.364809 master-2 kubenswrapper[4776]: I1011 10:32:45.364771 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-encryption-config\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.384526 master-2 kubenswrapper[4776]: I1011 10:32:45.384486 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qgjs\" (UniqueName: \"kubernetes.io/projected/3a156e42-88da-4ce6-9995-6865609e2711-kube-api-access-9qgjs\") pod \"apiserver-777cc846dc-729nm\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.490166 master-2 kubenswrapper[4776]: I1011 10:32:45.489746 4776 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:45.515335 master-2 kubenswrapper[4776]: I1011 10:32:45.515085 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-2" event={"ID":"d7d02073-00a3-41a2-8ca4-6932819886b8","Type":"ContainerStarted","Data":"2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123"} Oct 11 10:32:45.515335 master-2 kubenswrapper[4776]: I1011 10:32:45.515129 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-2" event={"ID":"d7d02073-00a3-41a2-8ca4-6932819886b8","Type":"ContainerStarted","Data":"d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf"} Oct 11 10:32:45.516472 master-2 kubenswrapper[4776]: I1011 10:32:45.516423 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"4f88b73b0d121e855641834122063be9","Type":"ContainerStarted","Data":"10893b6ff26cfe0860be9681aea4e014407f210edeb0807d7d50c1f9edb2d910"} Oct 11 10:32:45.518395 master-2 kubenswrapper[4776]: I1011 10:32:45.518365 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" event={"ID":"7a89cb41-fb97-4282-a12d-c6f8d87bc41e","Type":"ContainerDied","Data":"632a135875099c1d39a46b5212f4753eda648d4f1ce35df8cc0f167cab38ce86"} Oct 11 10:32:45.518471 master-2 kubenswrapper[4776]: I1011 10:32:45.518417 4776 scope.go:117] "RemoveContainer" containerID="2fdacc499227869c48e6c020ccf86a1927bcc28943d27adf9666ea1d0e17f652" Oct 11 10:32:45.518471 master-2 kubenswrapper[4776]: I1011 10:32:45.518438 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-555f658fd6-wmcqt" Oct 11 10:32:45.531252 master-2 kubenswrapper[4776]: I1011 10:32:45.530586 4776 scope.go:117] "RemoveContainer" containerID="a486fb47915dcf562b4049ef498fba0a79dc2f0d9c2b35c61e3a77be9dcdeae7" Oct 11 10:32:45.541754 master-2 kubenswrapper[4776]: I1011 10:32:45.541591 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-2" podStartSLOduration=3.541570139 podStartE2EDuration="3.541570139s" podCreationTimestamp="2025-10-11 10:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:32:45.535756976 +0000 UTC m=+400.320183685" watchObservedRunningTime="2025-10-11 10:32:45.541570139 +0000 UTC m=+400.325996848" Oct 11 10:32:45.544037 master-2 kubenswrapper[4776]: I1011 10:32:45.544002 4776 scope.go:117] "RemoveContainer" containerID="b832fb464d44d9bbecf0e8282e7db004cc8bdd8588f8fbb153766321b64a0e01" Oct 11 10:32:45.561645 master-2 kubenswrapper[4776]: I1011 10:32:45.561595 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-555f658fd6-wmcqt"] Oct 11 10:32:45.565514 master-2 kubenswrapper[4776]: I1011 10:32:45.565463 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-555f658fd6-wmcqt"] Oct 11 10:32:45.968411 master-2 kubenswrapper[4776]: I1011 10:32:45.968364 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:45.968411 master-2 
kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:45.968411 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:45.968411 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:45.968750 master-2 kubenswrapper[4776]: I1011 10:32:45.968414 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:46.073288 master-2 kubenswrapper[4776]: I1011 10:32:46.073193 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a89cb41-fb97-4282-a12d-c6f8d87bc41e" path="/var/lib/kubelet/pods/7a89cb41-fb97-4282-a12d-c6f8d87bc41e/volumes" Oct 11 10:32:46.138619 master-2 kubenswrapper[4776]: I1011 10:32:46.138576 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-guard-master-2"] Oct 11 10:32:46.139178 master-2 kubenswrapper[4776]: I1011 10:32:46.139156 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" Oct 11 10:32:46.142832 master-2 kubenswrapper[4776]: I1011 10:32:46.142798 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 11 10:32:46.142915 master-2 kubenswrapper[4776]: I1011 10:32:46.142836 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"openshift-service-ca.crt" Oct 11 10:32:46.154482 master-2 kubenswrapper[4776]: I1011 10:32:46.150717 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-guard-master-2"] Oct 11 10:32:46.159046 master-2 kubenswrapper[4776]: I1011 10:32:46.158513 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-777cc846dc-729nm"] Oct 11 10:32:46.169808 master-2 kubenswrapper[4776]: I1011 10:32:46.169761 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nrfh\" (UniqueName: \"kubernetes.io/projected/a5e255b2-14b3-42ed-9396-f96c40e231c0-kube-api-access-2nrfh\") pod \"kube-controller-manager-guard-master-2\" (UID: \"a5e255b2-14b3-42ed-9396-f96c40e231c0\") " pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" Oct 11 10:32:46.257274 master-2 kubenswrapper[4776]: W1011 10:32:46.257235 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a156e42_88da_4ce6_9995_6865609e2711.slice/crio-ee25a7d80d4e6c2818c35f57b04d647f323d2db779f406477d1f90d328d3f868 WatchSource:0}: Error finding container ee25a7d80d4e6c2818c35f57b04d647f323d2db779f406477d1f90d328d3f868: Status 404 returned error can't find the container with id ee25a7d80d4e6c2818c35f57b04d647f323d2db779f406477d1f90d328d3f868 Oct 11 10:32:46.271050 master-2 kubenswrapper[4776]: I1011 10:32:46.271008 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nrfh\" (UniqueName: \"kubernetes.io/projected/a5e255b2-14b3-42ed-9396-f96c40e231c0-kube-api-access-2nrfh\") pod \"kube-controller-manager-guard-master-2\" (UID: \"a5e255b2-14b3-42ed-9396-f96c40e231c0\") " pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" Oct 11 10:32:46.290817 master-2 
kubenswrapper[4776]: I1011 10:32:46.290787 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nrfh\" (UniqueName: \"kubernetes.io/projected/a5e255b2-14b3-42ed-9396-f96c40e231c0-kube-api-access-2nrfh\") pod \"kube-controller-manager-guard-master-2\" (UID: \"a5e255b2-14b3-42ed-9396-f96c40e231c0\") " pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" Oct 11 10:32:46.463133 master-2 kubenswrapper[4776]: I1011 10:32:46.463070 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" Oct 11 10:32:46.533034 master-2 kubenswrapper[4776]: I1011 10:32:46.531737 4776 generic.go:334] "Generic (PLEG): container finished" podID="3a156e42-88da-4ce6-9995-6865609e2711" containerID="e726f4cf3755426805ed7e9bd7973871407e8c8b66372a8c807859b61c3bd2f3" exitCode=0 Oct 11 10:32:46.533034 master-2 kubenswrapper[4776]: I1011 10:32:46.531789 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-777cc846dc-729nm" event={"ID":"3a156e42-88da-4ce6-9995-6865609e2711","Type":"ContainerDied","Data":"e726f4cf3755426805ed7e9bd7973871407e8c8b66372a8c807859b61c3bd2f3"} Oct 11 10:32:46.533034 master-2 kubenswrapper[4776]: I1011 10:32:46.531834 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-777cc846dc-729nm" event={"ID":"3a156e42-88da-4ce6-9995-6865609e2711","Type":"ContainerStarted","Data":"ee25a7d80d4e6c2818c35f57b04d647f323d2db779f406477d1f90d328d3f868"} Oct 11 10:32:46.877720 master-2 kubenswrapper[4776]: I1011 10:32:46.877469 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-guard-master-2"] Oct 11 10:32:46.969202 master-2 kubenswrapper[4776]: I1011 10:32:46.968827 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:46.969202 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:46.969202 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:46.969202 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:46.969202 master-2 kubenswrapper[4776]: I1011 10:32:46.968880 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:47.345585 master-2 kubenswrapper[4776]: W1011 10:32:47.345344 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5e255b2_14b3_42ed_9396_f96c40e231c0.slice/crio-4872f242a302dc42e4940d88bf78d89ad7ca848f809cde970d865e2018784c48 WatchSource:0}: Error finding container 4872f242a302dc42e4940d88bf78d89ad7ca848f809cde970d865e2018784c48: Status 404 returned error can't find the container with id 4872f242a302dc42e4940d88bf78d89ad7ca848f809cde970d865e2018784c48 Oct 11 10:32:47.546402 master-2 kubenswrapper[4776]: I1011 10:32:47.546327 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" 
event={"ID":"a5e255b2-14b3-42ed-9396-f96c40e231c0","Type":"ContainerStarted","Data":"4872f242a302dc42e4940d88bf78d89ad7ca848f809cde970d865e2018784c48"} Oct 11 10:32:47.548636 master-2 kubenswrapper[4776]: I1011 10:32:47.548566 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-777cc846dc-729nm" event={"ID":"3a156e42-88da-4ce6-9995-6865609e2711","Type":"ContainerStarted","Data":"90d6acbfbe353ba98c33d9d9275a952ddeabac687eed9e519947f935a2f44edf"} Oct 11 10:32:47.786749 master-2 kubenswrapper[4776]: I1011 10:32:47.786660 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log" Oct 11 10:32:47.969370 master-2 kubenswrapper[4776]: I1011 10:32:47.969274 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:47.969370 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:47.969370 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:47.969370 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:47.969370 master-2 kubenswrapper[4776]: I1011 10:32:47.969336 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:48.555322 master-2 kubenswrapper[4776]: I1011 10:32:48.555265 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"4f88b73b0d121e855641834122063be9","Type":"ContainerStarted","Data":"1c66c420f42b218cae752c5f11d7d84132ff9087b3b755852c6f533f1acaeece"} Oct 11 10:32:48.555322 master-2 kubenswrapper[4776]: I1011 10:32:48.555309 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"4f88b73b0d121e855641834122063be9","Type":"ContainerStarted","Data":"0f23b40fdead0283107ab65c161704aa7acddce7d9fe04f98e8ea9ab7f03a4cd"} Oct 11 10:32:48.555322 master-2 kubenswrapper[4776]: I1011 10:32:48.555319 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"4f88b73b0d121e855641834122063be9","Type":"ContainerStarted","Data":"f3c0c4b66c129c923e0c2ad907f82ab3a83141f8e7f3805af522d3e693204962"} Oct 11 10:32:48.556966 master-2 kubenswrapper[4776]: I1011 10:32:48.556947 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" event={"ID":"a5e255b2-14b3-42ed-9396-f96c40e231c0","Type":"ContainerStarted","Data":"8ec52bd758291544a17b9fe1ed1360f8ff65d06f8f59fe21646afc6be8c9e794"} Oct 11 10:32:48.557309 master-2 kubenswrapper[4776]: I1011 10:32:48.557257 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" Oct 11 10:32:48.559129 master-2 kubenswrapper[4776]: I1011 10:32:48.559093 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-777cc846dc-729nm" 
event={"ID":"3a156e42-88da-4ce6-9995-6865609e2711","Type":"ContainerStarted","Data":"e2dd8c36e185fe9780ea6d5b908ce2843fc734e8fa6bcfa6808d36a4d7c261b0"} Oct 11 10:32:48.563352 master-2 kubenswrapper[4776]: I1011 10:32:48.563318 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" Oct 11 10:32:48.585334 master-2 kubenswrapper[4776]: I1011 10:32:48.585219 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podStartSLOduration=11.585197576 podStartE2EDuration="11.585197576s" podCreationTimestamp="2025-10-11 10:32:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:32:48.582855424 +0000 UTC m=+403.367282173" watchObservedRunningTime="2025-10-11 10:32:48.585197576 +0000 UTC m=+403.369624325" Oct 11 10:32:48.603280 master-2 kubenswrapper[4776]: I1011 10:32:48.603193 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podStartSLOduration=2.6031722 podStartE2EDuration="2.6031722s" podCreationTimestamp="2025-10-11 10:32:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:32:48.601722162 +0000 UTC m=+403.386148871" watchObservedRunningTime="2025-10-11 10:32:48.6031722 +0000 UTC m=+403.387598949" Oct 11 10:32:48.631726 master-2 kubenswrapper[4776]: I1011 10:32:48.629447 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podStartSLOduration=57.629432254 podStartE2EDuration="57.629432254s" podCreationTimestamp="2025-10-11 10:31:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:32:48.626747893 +0000 UTC m=+403.411174602" watchObservedRunningTime="2025-10-11 10:32:48.629432254 +0000 UTC m=+403.413858963" Oct 11 10:32:48.969046 master-2 kubenswrapper[4776]: I1011 10:32:48.968949 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:48.969046 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:48.969046 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:48.969046 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:48.969046 master-2 kubenswrapper[4776]: I1011 10:32:48.969006 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:49.567906 master-2 kubenswrapper[4776]: I1011 10:32:49.567855 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/1.log" Oct 11 10:32:49.569016 master-2 kubenswrapper[4776]: I1011 10:32:49.568793 4776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/0.log" Oct 11 10:32:49.569016 master-2 kubenswrapper[4776]: I1011 10:32:49.568838 4776 generic.go:334] "Generic (PLEG): container finished" podID="6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c" containerID="9d1e12fc2f0f72ed70b132b8b498f2531a6ae8ae01d1a1a52b35463b0839dedd" exitCode=1 Oct 11 10:32:49.569484 master-2 kubenswrapper[4776]: I1011 10:32:49.569417 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" event={"ID":"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c","Type":"ContainerDied","Data":"9d1e12fc2f0f72ed70b132b8b498f2531a6ae8ae01d1a1a52b35463b0839dedd"} Oct 11 10:32:49.569803 master-2 kubenswrapper[4776]: I1011 10:32:49.569764 4776 scope.go:117] "RemoveContainer" containerID="2eff4353493e1e27d8a85bd8e32e0408e179cf5370df38de2ded9a10d5e6c314" Oct 11 10:32:49.570660 master-2 kubenswrapper[4776]: I1011 10:32:49.570622 4776 scope.go:117] "RemoveContainer" containerID="9d1e12fc2f0f72ed70b132b8b498f2531a6ae8ae01d1a1a52b35463b0839dedd" Oct 11 10:32:49.572235 master-2 kubenswrapper[4776]: E1011 10:32:49.571321 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-766ddf4575-wf7mj_openshift-ingress-operator(6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c)\"" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" podUID="6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c" Oct 11 10:32:49.970161 master-2 kubenswrapper[4776]: I1011 10:32:49.970103 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:49.970161 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:49.970161 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:49.970161 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:49.970677 master-2 kubenswrapper[4776]: I1011 10:32:49.970623 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:50.489923 master-2 kubenswrapper[4776]: I1011 10:32:50.489862 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:50.490149 master-2 kubenswrapper[4776]: I1011 10:32:50.490037 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:50.497574 master-2 kubenswrapper[4776]: I1011 10:32:50.497535 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:50.584161 master-2 kubenswrapper[4776]: I1011 10:32:50.584108 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/1.log" Oct 11 10:32:50.591859 master-2 kubenswrapper[4776]: I1011 10:32:50.591787 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:32:50.969571 master-2 kubenswrapper[4776]: I1011 10:32:50.969537 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:50.969571 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:50.969571 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:50.969571 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:50.969897 master-2 kubenswrapper[4776]: I1011 10:32:50.969875 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:51.829088 master-2 kubenswrapper[4776]: I1011 10:32:51.829033 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-777cc846dc-729nm"] Oct 11 10:32:51.969163 master-2 kubenswrapper[4776]: I1011 10:32:51.969109 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:51.969163 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:51.969163 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:51.969163 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:51.969457 master-2 kubenswrapper[4776]: I1011 10:32:51.969180 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:52.944492 master-2 kubenswrapper[4776]: I1011 10:32:52.944397 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-guard-master-2"] Oct 11 10:32:52.969761 master-2 kubenswrapper[4776]: I1011 10:32:52.969722 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:52.969761 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:52.969761 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:52.969761 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:52.970077 master-2 kubenswrapper[4776]: I1011 10:32:52.969771 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:53.603820 master-2 kubenswrapper[4776]: I1011 10:32:53.603715 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" containerID="cri-o://90d6acbfbe353ba98c33d9d9275a952ddeabac687eed9e519947f935a2f44edf" gracePeriod=120 Oct 11 10:32:53.603820 master-2 kubenswrapper[4776]: 
I1011 10:32:53.603772 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver-check-endpoints" containerID="cri-o://e2dd8c36e185fe9780ea6d5b908ce2843fc734e8fa6bcfa6808d36a4d7c261b0" gracePeriod=120 Oct 11 10:32:53.971386 master-2 kubenswrapper[4776]: I1011 10:32:53.971331 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:53.971386 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:53.971386 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:53.971386 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:53.972317 master-2 kubenswrapper[4776]: I1011 10:32:53.972288 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:54.612136 master-2 kubenswrapper[4776]: I1011 10:32:54.612034 4776 generic.go:334] "Generic (PLEG): container finished" podID="3a156e42-88da-4ce6-9995-6865609e2711" containerID="e2dd8c36e185fe9780ea6d5b908ce2843fc734e8fa6bcfa6808d36a4d7c261b0" exitCode=0 Oct 11 10:32:54.612136 master-2 kubenswrapper[4776]: I1011 10:32:54.612078 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-777cc846dc-729nm" event={"ID":"3a156e42-88da-4ce6-9995-6865609e2711","Type":"ContainerDied","Data":"e2dd8c36e185fe9780ea6d5b908ce2843fc734e8fa6bcfa6808d36a4d7c261b0"} Oct 11 10:32:54.969057 master-2 kubenswrapper[4776]: I1011 10:32:54.969002 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:54.969057 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:54.969057 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:54.969057 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:54.969057 master-2 kubenswrapper[4776]: I1011 10:32:54.969057 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: I1011 10:32:55.495728 4776 patch_prober.go:28] interesting pod/apiserver-777cc846dc-729nm container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 
10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:32:55.495823 master-2 kubenswrapper[4776]: I1011 10:32:55.495812 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:55.969395 master-2 kubenswrapper[4776]: I1011 10:32:55.969352 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:55.969395 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:55.969395 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:55.969395 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:55.969794 master-2 kubenswrapper[4776]: I1011 10:32:55.969765 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:56.969449 master-2 kubenswrapper[4776]: I1011 10:32:56.969329 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:56.969449 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:56.969449 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:56.969449 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:56.970432 master-2 kubenswrapper[4776]: I1011 10:32:56.969455 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:57.943286 master-2 kubenswrapper[4776]: I1011 10:32:57.943166 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:57.943286 master-2 kubenswrapper[4776]: I1011 10:32:57.943264 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:57.943613 master-2 kubenswrapper[4776]: I1011 10:32:57.943302 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:57.943613 master-2 kubenswrapper[4776]: I1011 10:32:57.943335 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:57.950707 master-2 kubenswrapper[4776]: I1011 10:32:57.950616 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:57.951103 master-2 kubenswrapper[4776]: I1011 10:32:57.951060 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:57.969697 master-2 kubenswrapper[4776]: I1011 10:32:57.969626 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:57.969697 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:57.969697 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:57.969697 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:57.970473 master-2 kubenswrapper[4776]: I1011 10:32:57.969707 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:58.647631 master-2 kubenswrapper[4776]: I1011 10:32:58.644991 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:58.647631 master-2 kubenswrapper[4776]: I1011 10:32:58.646557 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:32:58.968881 master-2 kubenswrapper[4776]: I1011 10:32:58.968794 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:58.968881 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:58.968881 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:58.968881 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:58.969176 master-2 kubenswrapper[4776]: I1011 10:32:58.968933 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:32:59.969927 master-2 kubenswrapper[4776]: I1011 10:32:59.969861 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:32:59.969927 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:32:59.969927 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:32:59.969927 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:32:59.970642 master-2 kubenswrapper[4776]: I1011 10:32:59.969952 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: I1011 10:33:00.495732 4776 patch_prober.go:28] interesting pod/apiserver-777cc846dc-729nm container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:33:00.495827 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:33:00.497518 master-2 kubenswrapper[4776]: I1011 10:33:00.495826 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:00.970399 master-2 kubenswrapper[4776]: I1011 10:33:00.970300 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:00.970399 master-2 kubenswrapper[4776]: [-]has-synced 
failed: reason withheld Oct 11 10:33:00.970399 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:00.970399 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:00.970399 master-2 kubenswrapper[4776]: I1011 10:33:00.970387 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:01.781791 master-2 kubenswrapper[4776]: I1011 10:33:01.781700 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-2"] Oct 11 10:33:01.782513 master-2 kubenswrapper[4776]: I1011 10:33:01.782462 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-2" Oct 11 10:33:01.787045 master-2 kubenswrapper[4776]: I1011 10:33:01.786974 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Oct 11 10:33:01.801245 master-2 kubenswrapper[4776]: I1011 10:33:01.801183 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-2"] Oct 11 10:33:01.881613 master-2 kubenswrapper[4776]: I1011 10:33:01.881531 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-var-lock\") pod \"installer-5-master-2\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") " pod="openshift-kube-scheduler/installer-5-master-2" Oct 11 10:33:01.881916 master-2 kubenswrapper[4776]: I1011 10:33:01.881871 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kubelet-dir\") pod \"installer-5-master-2\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") " pod="openshift-kube-scheduler/installer-5-master-2" Oct 11 10:33:01.881979 master-2 kubenswrapper[4776]: I1011 10:33:01.881965 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kube-api-access\") pod \"installer-5-master-2\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") " pod="openshift-kube-scheduler/installer-5-master-2" Oct 11 10:33:01.970080 master-2 kubenswrapper[4776]: I1011 10:33:01.970002 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:01.970080 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:01.970080 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:01.970080 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:01.970412 master-2 kubenswrapper[4776]: I1011 10:33:01.970115 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:01.983245 master-2 kubenswrapper[4776]: I1011 10:33:01.983199 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kubelet-dir\") pod \"installer-5-master-2\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") " pod="openshift-kube-scheduler/installer-5-master-2" Oct 11 10:33:01.983880 master-2 kubenswrapper[4776]: I1011 10:33:01.983267 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kube-api-access\") pod \"installer-5-master-2\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") " pod="openshift-kube-scheduler/installer-5-master-2" Oct 11 10:33:01.983880 master-2 kubenswrapper[4776]: I1011 10:33:01.983321 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-var-lock\") pod \"installer-5-master-2\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") " pod="openshift-kube-scheduler/installer-5-master-2" Oct 11 10:33:01.983880 master-2 kubenswrapper[4776]: I1011 10:33:01.983399 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kubelet-dir\") pod \"installer-5-master-2\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") " pod="openshift-kube-scheduler/installer-5-master-2" Oct 11 10:33:01.983880 master-2 kubenswrapper[4776]: I1011 10:33:01.983414 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-var-lock\") pod \"installer-5-master-2\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") " pod="openshift-kube-scheduler/installer-5-master-2" Oct 11 10:33:02.016582 master-2 kubenswrapper[4776]: I1011 10:33:02.016504 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kube-api-access\") pod \"installer-5-master-2\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") " pod="openshift-kube-scheduler/installer-5-master-2" Oct 11 10:33:02.110803 master-2 kubenswrapper[4776]: I1011 10:33:02.110625 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-2" Oct 11 10:33:02.547046 master-2 kubenswrapper[4776]: I1011 10:33:02.546989 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-2"] Oct 11 10:33:02.551715 master-2 kubenswrapper[4776]: W1011 10:33:02.551645 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podfc3dcbf6_abe1_45ca_992b_4d1c7e419128.slice/crio-b7e2bd31994c8a8b44562a2051be2b5813378c028e80cc6d98912e4ac9acbe2f WatchSource:0}: Error finding container b7e2bd31994c8a8b44562a2051be2b5813378c028e80cc6d98912e4ac9acbe2f: Status 404 returned error can't find the container with id b7e2bd31994c8a8b44562a2051be2b5813378c028e80cc6d98912e4ac9acbe2f Oct 11 10:33:02.663198 master-2 kubenswrapper[4776]: I1011 10:33:02.663130 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-2" event={"ID":"fc3dcbf6-abe1-45ca-992b-4d1c7e419128","Type":"ContainerStarted","Data":"b7e2bd31994c8a8b44562a2051be2b5813378c028e80cc6d98912e4ac9acbe2f"} Oct 11 10:33:02.970100 master-2 kubenswrapper[4776]: I1011 10:33:02.970052 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:02.970100 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:02.970100 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:02.970100 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:02.970468 master-2 kubenswrapper[4776]: I1011 10:33:02.970421 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:03.673329 master-2 kubenswrapper[4776]: I1011 10:33:03.673188 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-2" event={"ID":"fc3dcbf6-abe1-45ca-992b-4d1c7e419128","Type":"ContainerStarted","Data":"1c42b2aad8d56f14bb8f7731268d9b68d7e1d8d9d1d9e5f2c2ae8c23a10aa4cb"} Oct 11 10:33:03.700944 master-2 kubenswrapper[4776]: I1011 10:33:03.700771 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-2" podStartSLOduration=2.700725102 podStartE2EDuration="2.700725102s" podCreationTimestamp="2025-10-11 10:33:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:33:03.694581749 +0000 UTC m=+418.479008478" watchObservedRunningTime="2025-10-11 10:33:03.700725102 +0000 UTC m=+418.485151851" Oct 11 10:33:03.969902 master-2 kubenswrapper[4776]: I1011 10:33:03.969728 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:03.969902 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:03.969902 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:03.969902 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:03.969902 master-2 kubenswrapper[4776]: I1011 
10:33:03.969815 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:04.060072 master-2 kubenswrapper[4776]: I1011 10:33:04.060011 4776 scope.go:117] "RemoveContainer" containerID="9d1e12fc2f0f72ed70b132b8b498f2531a6ae8ae01d1a1a52b35463b0839dedd" Oct 11 10:33:04.680370 master-2 kubenswrapper[4776]: I1011 10:33:04.680293 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/1.log" Oct 11 10:33:04.681277 master-2 kubenswrapper[4776]: I1011 10:33:04.680777 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" event={"ID":"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c","Type":"ContainerStarted","Data":"c8328096ff83e7ba5543b3018932260182e624ccc8f4652947efc0b12da9a115"} Oct 11 10:33:04.970384 master-2 kubenswrapper[4776]: I1011 10:33:04.970214 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:04.970384 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:04.970384 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:04.970384 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:04.970384 master-2 kubenswrapper[4776]: I1011 10:33:04.970350 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: I1011 10:33:05.498704 4776 patch_prober.go:28] interesting pod/apiserver-777cc846dc-729nm container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache 
ok Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:33:05.498787 master-2 kubenswrapper[4776]: I1011 10:33:05.498771 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:05.499843 master-2 kubenswrapper[4776]: I1011 10:33:05.498870 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:33:05.969589 master-2 kubenswrapper[4776]: I1011 10:33:05.969527 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:05.969589 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:05.969589 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:05.969589 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:05.970322 master-2 kubenswrapper[4776]: I1011 10:33:05.969593 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:06.971231 master-2 kubenswrapper[4776]: I1011 10:33:06.971016 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:06.971231 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:06.971231 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:06.971231 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:06.972655 master-2 kubenswrapper[4776]: I1011 10:33:06.972549 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:07.969771 master-2 kubenswrapper[4776]: I1011 10:33:07.969643 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:07.969771 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:07.969771 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:07.969771 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:07.970345 master-2 kubenswrapper[4776]: I1011 10:33:07.970287 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" 
podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:08.969731 master-2 kubenswrapper[4776]: I1011 10:33:08.969570 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:08.969731 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:08.969731 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:08.969731 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:08.970596 master-2 kubenswrapper[4776]: I1011 10:33:08.969739 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:09.970855 master-2 kubenswrapper[4776]: I1011 10:33:09.970766 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:09.970855 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:09.970855 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:09.970855 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:09.972095 master-2 kubenswrapper[4776]: I1011 10:33:09.970911 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: I1011 10:33:10.498084 4776 patch_prober.go:28] interesting pod/apiserver-777cc846dc-729nm container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:33:10.498134 master-2 
kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:33:10.498134 master-2 kubenswrapper[4776]: I1011 10:33:10.498139 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:10.969935 master-2 kubenswrapper[4776]: I1011 10:33:10.969848 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:10.969935 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:10.969935 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:10.969935 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:10.970491 master-2 kubenswrapper[4776]: I1011 10:33:10.969955 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:11.970528 master-2 kubenswrapper[4776]: I1011 10:33:11.970437 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:11.970528 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:11.970528 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:11.970528 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:11.971537 master-2 kubenswrapper[4776]: I1011 10:33:11.970551 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:12.969295 master-2 kubenswrapper[4776]: I1011 10:33:12.969230 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:12.969295 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:12.969295 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:12.969295 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:12.969579 master-2 kubenswrapper[4776]: I1011 10:33:12.969321 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:13.969468 master-2 kubenswrapper[4776]: I1011 10:33:13.969361 4776 patch_prober.go:28] 
interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:13.969468 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:13.969468 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:13.969468 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:13.970662 master-2 kubenswrapper[4776]: I1011 10:33:13.969475 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:14.969508 master-2 kubenswrapper[4776]: I1011 10:33:14.969419 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:14.969508 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:14.969508 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:14.969508 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:14.969508 master-2 kubenswrapper[4776]: I1011 10:33:14.969486 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: I1011 10:33:15.497322 4776 patch_prober.go:28] interesting pod/apiserver-777cc846dc-729nm container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: 
[+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:33:15.497412 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:33:15.498152 master-2 kubenswrapper[4776]: I1011 10:33:15.497423 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:15.969668 master-2 kubenswrapper[4776]: I1011 10:33:15.969552 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:15.969668 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:15.969668 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:15.969668 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:15.969668 master-2 kubenswrapper[4776]: I1011 10:33:15.969661 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:16.971556 master-2 kubenswrapper[4776]: I1011 10:33:16.971431 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:16.971556 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:16.971556 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:16.971556 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:16.971556 master-2 kubenswrapper[4776]: I1011 10:33:16.971548 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:17.765482 master-2 kubenswrapper[4776]: I1011 10:33:17.765433 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-v6dfc_8757af56-20fb-439e-adba-7e4e50378936/assisted-installer-controller/0.log" Oct 11 10:33:17.799492 master-2 kubenswrapper[4776]: I1011 10:33:17.799333 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log" Oct 11 10:33:17.970698 master-2 kubenswrapper[4776]: I1011 10:33:17.970575 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:17.970698 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:17.970698 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:17.970698 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:17.970976 master-2 kubenswrapper[4776]: I1011 10:33:17.970731 4776 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:18.970257 master-2 kubenswrapper[4776]: I1011 10:33:18.970201 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:18.970257 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:18.970257 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:18.970257 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:18.971249 master-2 kubenswrapper[4776]: I1011 10:33:18.970829 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:19.970414 master-2 kubenswrapper[4776]: I1011 10:33:19.970352 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:19.970414 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:19.970414 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:19.970414 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:19.971378 master-2 kubenswrapper[4776]: I1011 10:33:19.970424 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: I1011 10:33:20.496146 4776 patch_prober.go:28] interesting pod/apiserver-777cc846dc-729nm container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: 
[+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:33:20.496222 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:33:20.497319 master-2 kubenswrapper[4776]: I1011 10:33:20.496223 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:20.970596 master-2 kubenswrapper[4776]: I1011 10:33:20.970517 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:20.970596 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:20.970596 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:20.970596 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:20.971580 master-2 kubenswrapper[4776]: I1011 10:33:20.970617 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:21.969970 master-2 kubenswrapper[4776]: I1011 10:33:21.969907 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:21.969970 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:21.969970 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:21.969970 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:21.970574 master-2 kubenswrapper[4776]: I1011 10:33:21.970518 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:22.971260 master-2 kubenswrapper[4776]: I1011 10:33:22.971167 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:22.971260 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:22.971260 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:22.971260 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:22.972934 master-2 kubenswrapper[4776]: I1011 10:33:22.971280 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 
10:33:23.094312 master-2 kubenswrapper[4776]: I1011 10:33:23.094212 4776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-2"] Oct 11 10:33:23.098099 master-2 kubenswrapper[4776]: I1011 10:33:23.098031 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:23.171856 master-2 kubenswrapper[4776]: I1011 10:33:23.171758 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-2"] Oct 11 10:33:23.229865 master-2 kubenswrapper[4776]: I1011 10:33:23.225904 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-cert-dir\") pod \"kube-apiserver-master-2\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:23.229865 master-2 kubenswrapper[4776]: I1011 10:33:23.226016 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-resource-dir\") pod \"kube-apiserver-master-2\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:23.229865 master-2 kubenswrapper[4776]: I1011 10:33:23.226038 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-audit-dir\") pod \"kube-apiserver-master-2\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:23.327614 master-2 kubenswrapper[4776]: I1011 10:33:23.327520 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-cert-dir\") pod \"kube-apiserver-master-2\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:23.327614 master-2 kubenswrapper[4776]: I1011 10:33:23.327625 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-resource-dir\") pod \"kube-apiserver-master-2\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:23.328078 master-2 kubenswrapper[4776]: I1011 10:33:23.327656 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-audit-dir\") pod \"kube-apiserver-master-2\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:23.328078 master-2 kubenswrapper[4776]: I1011 10:33:23.327803 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-audit-dir\") pod \"kube-apiserver-master-2\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:23.328078 master-2 kubenswrapper[4776]: I1011 10:33:23.327805 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-resource-dir\") pod \"kube-apiserver-master-2\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:23.328078 master-2 kubenswrapper[4776]: I1011 10:33:23.327843 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-cert-dir\") pod \"kube-apiserver-master-2\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:23.466806 master-2 kubenswrapper[4776]: I1011 10:33:23.466729 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:23.842329 master-2 kubenswrapper[4776]: I1011 10:33:23.841967 4776 generic.go:334] "Generic (PLEG): container finished" podID="9041570beb5002e8da158e70e12f0c16" containerID="f6ada1ea5b943d4fa8b24de52a5bcc62ede0268ae6328e324cc9b6e2a99596b1" exitCode=0 Oct 11 10:33:23.843336 master-2 kubenswrapper[4776]: I1011 10:33:23.842655 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"9041570beb5002e8da158e70e12f0c16","Type":"ContainerDied","Data":"f6ada1ea5b943d4fa8b24de52a5bcc62ede0268ae6328e324cc9b6e2a99596b1"} Oct 11 10:33:23.843336 master-2 kubenswrapper[4776]: I1011 10:33:23.842758 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"9041570beb5002e8da158e70e12f0c16","Type":"ContainerStarted","Data":"57f45f7af0db732a260dad8f49ae694c1ca688994699767f3768f884b738ad40"} Oct 11 10:33:23.845182 master-2 kubenswrapper[4776]: I1011 10:33:23.845131 4776 generic.go:334] "Generic (PLEG): container finished" podID="d7d02073-00a3-41a2-8ca4-6932819886b8" containerID="2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123" exitCode=0 Oct 11 10:33:23.845182 master-2 kubenswrapper[4776]: I1011 10:33:23.845174 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-2" event={"ID":"d7d02073-00a3-41a2-8ca4-6932819886b8","Type":"ContainerDied","Data":"2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123"} Oct 11 10:33:23.969545 master-2 kubenswrapper[4776]: I1011 10:33:23.969465 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:23.969545 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:23.969545 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:23.969545 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:23.969545 master-2 kubenswrapper[4776]: I1011 10:33:23.969533 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:24.868247 master-2 kubenswrapper[4776]: I1011 10:33:24.868103 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"9041570beb5002e8da158e70e12f0c16","Type":"ContainerStarted","Data":"f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9"} Oct 11 
10:33:24.868714 master-2 kubenswrapper[4776]: I1011 10:33:24.868263 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"9041570beb5002e8da158e70e12f0c16","Type":"ContainerStarted","Data":"e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510"} Oct 11 10:33:24.868714 master-2 kubenswrapper[4776]: I1011 10:33:24.868281 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"9041570beb5002e8da158e70e12f0c16","Type":"ContainerStarted","Data":"ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da"} Oct 11 10:33:24.969937 master-2 kubenswrapper[4776]: I1011 10:33:24.969869 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:24.969937 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:24.969937 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:24.969937 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:24.970342 master-2 kubenswrapper[4776]: I1011 10:33:24.969983 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:25.176057 master-2 kubenswrapper[4776]: I1011 10:33:25.176006 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-2" Oct 11 10:33:25.256016 master-2 kubenswrapper[4776]: I1011 10:33:25.255948 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7d02073-00a3-41a2-8ca4-6932819886b8-kube-api-access\") pod \"d7d02073-00a3-41a2-8ca4-6932819886b8\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " Oct 11 10:33:25.256206 master-2 kubenswrapper[4776]: I1011 10:33:25.256087 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-kubelet-dir\") pod \"d7d02073-00a3-41a2-8ca4-6932819886b8\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " Oct 11 10:33:25.256206 master-2 kubenswrapper[4776]: I1011 10:33:25.256121 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-var-lock\") pod \"d7d02073-00a3-41a2-8ca4-6932819886b8\" (UID: \"d7d02073-00a3-41a2-8ca4-6932819886b8\") " Oct 11 10:33:25.256295 master-2 kubenswrapper[4776]: I1011 10:33:25.256241 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d7d02073-00a3-41a2-8ca4-6932819886b8" (UID: "d7d02073-00a3-41a2-8ca4-6932819886b8"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:33:25.256395 master-2 kubenswrapper[4776]: I1011 10:33:25.256346 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-var-lock" (OuterVolumeSpecName: "var-lock") pod "d7d02073-00a3-41a2-8ca4-6932819886b8" (UID: "d7d02073-00a3-41a2-8ca4-6932819886b8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:33:25.256511 master-2 kubenswrapper[4776]: I1011 10:33:25.256486 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-kubelet-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:33:25.256511 master-2 kubenswrapper[4776]: I1011 10:33:25.256505 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d7d02073-00a3-41a2-8ca4-6932819886b8-var-lock\") on node \"master-2\" DevicePath \"\"" Oct 11 10:33:25.258662 master-2 kubenswrapper[4776]: I1011 10:33:25.258630 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7d02073-00a3-41a2-8ca4-6932819886b8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7d02073-00a3-41a2-8ca4-6932819886b8" (UID: "d7d02073-00a3-41a2-8ca4-6932819886b8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:33:25.359149 master-2 kubenswrapper[4776]: I1011 10:33:25.358982 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7d02073-00a3-41a2-8ca4-6932819886b8-kube-api-access\") on node \"master-2\" DevicePath \"\"" Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: I1011 10:33:25.494558 4776 patch_prober.go:28] interesting pod/apiserver-777cc846dc-729nm container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: 
[+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:33:25.494618 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:33:25.495360 master-2 kubenswrapper[4776]: I1011 10:33:25.494630 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:25.876853 master-2 kubenswrapper[4776]: I1011 10:33:25.876793 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"9041570beb5002e8da158e70e12f0c16","Type":"ContainerStarted","Data":"784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6"} Oct 11 10:33:25.876853 master-2 kubenswrapper[4776]: I1011 10:33:25.876846 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"9041570beb5002e8da158e70e12f0c16","Type":"ContainerStarted","Data":"b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57"} Oct 11 10:33:25.877926 master-2 kubenswrapper[4776]: I1011 10:33:25.877895 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:25.879218 master-2 kubenswrapper[4776]: I1011 10:33:25.879190 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-2" event={"ID":"d7d02073-00a3-41a2-8ca4-6932819886b8","Type":"ContainerDied","Data":"d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf"} Oct 11 10:33:25.879296 master-2 kubenswrapper[4776]: I1011 10:33:25.879221 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf" Oct 11 10:33:25.879296 master-2 kubenswrapper[4776]: I1011 10:33:25.879256 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-2" Oct 11 10:33:25.902510 master-2 kubenswrapper[4776]: I1011 10:33:25.902433 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-2" podStartSLOduration=2.902414079 podStartE2EDuration="2.902414079s" podCreationTimestamp="2025-10-11 10:33:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:33:25.899299177 +0000 UTC m=+440.683725886" watchObservedRunningTime="2025-10-11 10:33:25.902414079 +0000 UTC m=+440.686840788" Oct 11 10:33:25.968522 master-2 kubenswrapper[4776]: I1011 10:33:25.968449 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:25.968522 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:25.968522 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:25.968522 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:25.968972 master-2 kubenswrapper[4776]: I1011 10:33:25.968550 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:26.970118 master-2 kubenswrapper[4776]: I1011 10:33:26.970051 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:26.970118 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:26.970118 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:26.970118 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:26.970623 master-2 kubenswrapper[4776]: I1011 10:33:26.970160 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:27.970432 master-2 kubenswrapper[4776]: I1011 10:33:27.970331 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:27.970432 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:27.970432 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:27.970432 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:27.971281 master-2 kubenswrapper[4776]: I1011 10:33:27.970451 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:27.978645 master-2 kubenswrapper[4776]: I1011 10:33:27.978572 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-guard-master-2"] Oct 11 10:33:27.978866 master-2 
kubenswrapper[4776]: E1011 10:33:27.978808 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7d02073-00a3-41a2-8ca4-6932819886b8" containerName="installer" Oct 11 10:33:27.978866 master-2 kubenswrapper[4776]: I1011 10:33:27.978821 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7d02073-00a3-41a2-8ca4-6932819886b8" containerName="installer" Oct 11 10:33:27.978982 master-2 kubenswrapper[4776]: I1011 10:33:27.978916 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7d02073-00a3-41a2-8ca4-6932819886b8" containerName="installer" Oct 11 10:33:27.979335 master-2 kubenswrapper[4776]: I1011 10:33:27.979286 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" Oct 11 10:33:27.983619 master-2 kubenswrapper[4776]: I1011 10:33:27.983576 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 11 10:33:27.984005 master-2 kubenswrapper[4776]: I1011 10:33:27.983974 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"openshift-service-ca.crt" Oct 11 10:33:27.995987 master-2 kubenswrapper[4776]: I1011 10:33:27.995907 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-guard-master-2"] Oct 11 10:33:28.143062 master-2 kubenswrapper[4776]: I1011 10:33:28.142975 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv7m6\" (UniqueName: \"kubernetes.io/projected/1a003c5f-2a49-44fb-93a8-7a83319ce8e8-kube-api-access-bv7m6\") pod \"kube-apiserver-guard-master-2\" (UID: \"1a003c5f-2a49-44fb-93a8-7a83319ce8e8\") " pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" Oct 11 10:33:28.244534 master-2 kubenswrapper[4776]: I1011 10:33:28.244413 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv7m6\" (UniqueName: \"kubernetes.io/projected/1a003c5f-2a49-44fb-93a8-7a83319ce8e8-kube-api-access-bv7m6\") pod \"kube-apiserver-guard-master-2\" (UID: \"1a003c5f-2a49-44fb-93a8-7a83319ce8e8\") " pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" Oct 11 10:33:28.267801 master-2 kubenswrapper[4776]: I1011 10:33:28.267754 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv7m6\" (UniqueName: \"kubernetes.io/projected/1a003c5f-2a49-44fb-93a8-7a83319ce8e8-kube-api-access-bv7m6\") pod \"kube-apiserver-guard-master-2\" (UID: \"1a003c5f-2a49-44fb-93a8-7a83319ce8e8\") " pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" Oct 11 10:33:28.301955 master-2 kubenswrapper[4776]: I1011 10:33:28.301898 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" Oct 11 10:33:28.467652 master-2 kubenswrapper[4776]: I1011 10:33:28.467611 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:28.467652 master-2 kubenswrapper[4776]: I1011 10:33:28.467658 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:28.478317 master-2 kubenswrapper[4776]: I1011 10:33:28.478274 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:28.763651 master-2 kubenswrapper[4776]: I1011 10:33:28.763585 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-guard-master-2"] Oct 11 10:33:28.899559 master-2 kubenswrapper[4776]: I1011 10:33:28.899496 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" event={"ID":"1a003c5f-2a49-44fb-93a8-7a83319ce8e8","Type":"ContainerStarted","Data":"e944b9f0fe05de000f493e43201fbe6c63e5bc060919c3b00af98aac25efe17d"} Oct 11 10:33:28.903545 master-2 kubenswrapper[4776]: I1011 10:33:28.903521 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:28.968755 master-2 kubenswrapper[4776]: I1011 10:33:28.968718 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:28.968755 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:28.968755 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:28.968755 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:28.968991 master-2 kubenswrapper[4776]: I1011 10:33:28.968767 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:29.906603 master-2 kubenswrapper[4776]: I1011 10:33:29.906559 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" event={"ID":"1a003c5f-2a49-44fb-93a8-7a83319ce8e8","Type":"ContainerStarted","Data":"beaf5a3da8aca93aa44591bd942154456555dc1572d65396b432139667d779a5"} Oct 11 10:33:29.907202 master-2 kubenswrapper[4776]: I1011 10:33:29.907181 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" Oct 11 10:33:29.912056 master-2 kubenswrapper[4776]: I1011 10:33:29.912026 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" Oct 11 10:33:29.929660 master-2 kubenswrapper[4776]: I1011 10:33:29.929547 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podStartSLOduration=2.929522853 podStartE2EDuration="2.929522853s" podCreationTimestamp="2025-10-11 10:33:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 
10:33:29.928801254 +0000 UTC m=+444.713227973" watchObservedRunningTime="2025-10-11 10:33:29.929522853 +0000 UTC m=+444.713949572" Oct 11 10:33:29.969165 master-2 kubenswrapper[4776]: I1011 10:33:29.969109 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:29.969165 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:29.969165 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:29.969165 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:29.969165 master-2 kubenswrapper[4776]: I1011 10:33:29.969161 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: I1011 10:33:30.497911 4776 patch_prober.go:28] interesting pod/apiserver-777cc846dc-729nm container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:33:30.497967 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:33:30.498645 master-2 kubenswrapper[4776]: I1011 10:33:30.497989 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:30.969871 master-2 kubenswrapper[4776]: I1011 10:33:30.969800 4776 patch_prober.go:28] interesting 
pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:30.969871 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:30.969871 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:30.969871 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:30.970533 master-2 kubenswrapper[4776]: I1011 10:33:30.969891 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:31.970341 master-2 kubenswrapper[4776]: I1011 10:33:31.970232 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:31.970341 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:31.970341 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:31.970341 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:31.970341 master-2 kubenswrapper[4776]: I1011 10:33:31.970335 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:32.971013 master-2 kubenswrapper[4776]: I1011 10:33:32.970945 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:32.971013 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:32.971013 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:32.971013 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:32.971013 master-2 kubenswrapper[4776]: I1011 10:33:32.971005 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:33.970875 master-2 kubenswrapper[4776]: I1011 10:33:33.970620 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:33.970875 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:33.970875 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:33.970875 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:33.971994 master-2 kubenswrapper[4776]: I1011 10:33:33.970901 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:33.975286 master-2 kubenswrapper[4776]: I1011 10:33:33.975227 4776 kubelet.go:2421] "SyncLoop ADD" 
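The repeated "Probe failed" entries for router-default-5ddb89f76-57kcw above come from the kubelet's HTTP prober: it GETs the container's health endpoint and treats any status code outside the 200-399 range as a failure, so the router's 500 keeps being logged until its backend-http and has-synced checks turn healthy. A minimal Go sketch of that pass/fail decision, using a hypothetical local URL standing in for the router's health endpoint:

// probe_sketch.go - a minimal sketch of an HTTP probe check against a
// hypothetical endpoint; it mirrors the rule the kubelet prober applies
// (status 200-399 is success, anything else, like the router's 500, fails).
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func probe(url string) (ok bool, startOfBody string, err error) {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return false, "", err
	}
	defer resp.Body.Close()
	// Keep only the start of the body, much like the "start-of-body=" field above.
	b, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
	return resp.StatusCode >= 200 && resp.StatusCode < 400, string(b), nil
}

func main() {
	// Hypothetical URL; the real probe target is configured on the pod spec.
	ok, body, err := probe("http://127.0.0.1:1936/healthz/ready")
	switch {
	case err != nil:
		fmt.Println("probe error:", err)
	case !ok:
		fmt.Printf("Probe failed, start-of-body=%s\n", body)
	default:
		fmt.Println("probe ok")
	}
}
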
source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-2"] Oct 11 10:33:33.976774 master-2 kubenswrapper[4776]: I1011 10:33:33.976740 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:33:34.013420 master-2 kubenswrapper[4776]: I1011 10:33:34.013180 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-2"] Oct 11 10:33:34.118159 master-2 kubenswrapper[4776]: I1011 10:33:34.118112 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-resource-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: \"f26cf13b1c8c4f1b57c0ac506ef256a4\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:33:34.118159 master-2 kubenswrapper[4776]: I1011 10:33:34.118156 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-cert-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: \"f26cf13b1c8c4f1b57c0ac506ef256a4\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:33:34.219791 master-2 kubenswrapper[4776]: I1011 10:33:34.219757 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-resource-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: \"f26cf13b1c8c4f1b57c0ac506ef256a4\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:33:34.219953 master-2 kubenswrapper[4776]: I1011 10:33:34.219868 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-resource-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: \"f26cf13b1c8c4f1b57c0ac506ef256a4\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:33:34.220002 master-2 kubenswrapper[4776]: I1011 10:33:34.219932 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-cert-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: \"f26cf13b1c8c4f1b57c0ac506ef256a4\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:33:34.220060 master-2 kubenswrapper[4776]: I1011 10:33:34.220048 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-cert-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: \"f26cf13b1c8c4f1b57c0ac506ef256a4\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:33:34.311539 master-2 kubenswrapper[4776]: I1011 10:33:34.311414 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:33:34.936876 master-2 kubenswrapper[4776]: I1011 10:33:34.936823 4776 generic.go:334] "Generic (PLEG): container finished" podID="fc3dcbf6-abe1-45ca-992b-4d1c7e419128" containerID="1c42b2aad8d56f14bb8f7731268d9b68d7e1d8d9d1d9e5f2c2ae8c23a10aa4cb" exitCode=0 Oct 11 10:33:34.937126 master-2 kubenswrapper[4776]: I1011 10:33:34.936907 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-2" event={"ID":"fc3dcbf6-abe1-45ca-992b-4d1c7e419128","Type":"ContainerDied","Data":"1c42b2aad8d56f14bb8f7731268d9b68d7e1d8d9d1d9e5f2c2ae8c23a10aa4cb"} Oct 11 10:33:34.938995 master-2 kubenswrapper[4776]: I1011 10:33:34.938958 4776 generic.go:334] "Generic (PLEG): container finished" podID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerID="c57c60875e1be574e46fe440bf3b2752ffb605bb2f328363af8d1f914310116f" exitCode=0 Oct 11 10:33:34.938995 master-2 kubenswrapper[4776]: I1011 10:33:34.938991 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" event={"ID":"f26cf13b1c8c4f1b57c0ac506ef256a4","Type":"ContainerDied","Data":"c57c60875e1be574e46fe440bf3b2752ffb605bb2f328363af8d1f914310116f"} Oct 11 10:33:34.939098 master-2 kubenswrapper[4776]: I1011 10:33:34.939006 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" event={"ID":"f26cf13b1c8c4f1b57c0ac506ef256a4","Type":"ContainerStarted","Data":"0ed01188c7283cfff70d6c5cb4504465f9e9f1843a1b8c89bb6c36df04a63ac6"} Oct 11 10:33:34.970633 master-2 kubenswrapper[4776]: I1011 10:33:34.970555 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:34.970633 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:34.970633 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:34.970633 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:34.970901 master-2 kubenswrapper[4776]: I1011 10:33:34.970635 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: I1011 10:33:35.496047 4776 patch_prober.go:28] interesting pod/apiserver-777cc846dc-729nm container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 
10:33:35.496237 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:33:35.496237 master-2 kubenswrapper[4776]: I1011 10:33:35.496126 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:35.951473 master-2 kubenswrapper[4776]: I1011 10:33:35.951390 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" event={"ID":"f26cf13b1c8c4f1b57c0ac506ef256a4","Type":"ContainerStarted","Data":"3bed6e75ec56e4b27551229ac1cfed015f19cbbd37a51de7899f2a409f7f3107"} Oct 11 10:33:35.951768 master-2 kubenswrapper[4776]: I1011 10:33:35.951482 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" event={"ID":"f26cf13b1c8c4f1b57c0ac506ef256a4","Type":"ContainerStarted","Data":"b1b4a56fa0152f300c6a99db97775492cfcdce4712ae78b30e2ac340b25efd8c"} Oct 11 10:33:35.951768 master-2 kubenswrapper[4776]: I1011 10:33:35.951545 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" event={"ID":"f26cf13b1c8c4f1b57c0ac506ef256a4","Type":"ContainerStarted","Data":"65701df136730297d07e924a9003107719afae6bc3e70126f7680b788afdcc01"} Oct 11 10:33:35.970129 master-2 kubenswrapper[4776]: I1011 10:33:35.970072 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:35.970129 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:35.970129 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:35.970129 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:35.970489 master-2 kubenswrapper[4776]: I1011 10:33:35.970143 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:35.980090 master-2 kubenswrapper[4776]: I1011 10:33:35.979846 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" podStartSLOduration=2.979804798 
podStartE2EDuration="2.979804798s" podCreationTimestamp="2025-10-11 10:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:33:35.973748488 +0000 UTC m=+450.758175247" watchObservedRunningTime="2025-10-11 10:33:35.979804798 +0000 UTC m=+450.764231547" Oct 11 10:33:36.297460 master-2 kubenswrapper[4776]: I1011 10:33:36.297378 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-2" Oct 11 10:33:36.385228 master-2 kubenswrapper[4776]: I1011 10:33:36.384018 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-guard-master-2"] Oct 11 10:33:36.448931 master-2 kubenswrapper[4776]: I1011 10:33:36.448826 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kube-api-access\") pod \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") " Oct 11 10:33:36.449312 master-2 kubenswrapper[4776]: I1011 10:33:36.449144 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kubelet-dir\") pod \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") " Oct 11 10:33:36.449312 master-2 kubenswrapper[4776]: I1011 10:33:36.449256 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fc3dcbf6-abe1-45ca-992b-4d1c7e419128" (UID: "fc3dcbf6-abe1-45ca-992b-4d1c7e419128"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:33:36.449474 master-2 kubenswrapper[4776]: I1011 10:33:36.449324 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-var-lock\") pod \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\" (UID: \"fc3dcbf6-abe1-45ca-992b-4d1c7e419128\") " Oct 11 10:33:36.449474 master-2 kubenswrapper[4776]: I1011 10:33:36.449405 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-var-lock" (OuterVolumeSpecName: "var-lock") pod "fc3dcbf6-abe1-45ca-992b-4d1c7e419128" (UID: "fc3dcbf6-abe1-45ca-992b-4d1c7e419128"). InnerVolumeSpecName "var-lock". 
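The "Observed pod startup duration" entries above record no image pulls (firstStartedPulling and lastFinishedPulling are the zero time), and in that case the reported podStartE2EDuration lines up with watchObservedRunningTime minus podCreationTimestamp: 10:33:35.979804798 minus 10:33:33 gives the 2.979804798s reported for the scheduler pod. A quick Go check of that arithmetic, with the timestamps copied from the entry above:

// startup_duration_sketch.go - verifies the duration arithmetic in the
// "Observed pod startup duration" entry for openshift-kube-scheduler-master-2.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the timestamps printed in the log entry above.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2025-10-11 10:33:33 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-10-11 10:33:35.979804798 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 2.979804798s, matching podStartE2EDuration for the scheduler pod.
	fmt.Println(observed.Sub(created))
}
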
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:33:36.449919 master-2 kubenswrapper[4776]: I1011 10:33:36.449874 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-var-lock\") on node \"master-2\" DevicePath \"\"" Oct 11 10:33:36.449919 master-2 kubenswrapper[4776]: I1011 10:33:36.449899 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kubelet-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:33:36.452262 master-2 kubenswrapper[4776]: I1011 10:33:36.452144 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fc3dcbf6-abe1-45ca-992b-4d1c7e419128" (UID: "fc3dcbf6-abe1-45ca-992b-4d1c7e419128"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:33:36.550987 master-2 kubenswrapper[4776]: I1011 10:33:36.550848 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc3dcbf6-abe1-45ca-992b-4d1c7e419128-kube-api-access\") on node \"master-2\" DevicePath \"\"" Oct 11 10:33:36.972350 master-2 kubenswrapper[4776]: I1011 10:33:36.972284 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:36.972350 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:36.972350 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:36.972350 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:36.973096 master-2 kubenswrapper[4776]: I1011 10:33:36.972436 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:36.974176 master-2 kubenswrapper[4776]: I1011 10:33:36.974112 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-2" event={"ID":"fc3dcbf6-abe1-45ca-992b-4d1c7e419128","Type":"ContainerDied","Data":"b7e2bd31994c8a8b44562a2051be2b5813378c028e80cc6d98912e4ac9acbe2f"} Oct 11 10:33:36.974497 master-2 kubenswrapper[4776]: I1011 10:33:36.974190 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7e2bd31994c8a8b44562a2051be2b5813378c028e80cc6d98912e4ac9acbe2f" Oct 11 10:33:36.974497 master-2 kubenswrapper[4776]: I1011 10:33:36.974270 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-2" Oct 11 10:33:36.974838 master-2 kubenswrapper[4776]: I1011 10:33:36.974600 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:33:37.970657 master-2 kubenswrapper[4776]: I1011 10:33:37.970544 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:37.970657 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:37.970657 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:37.970657 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:37.970657 master-2 kubenswrapper[4776]: I1011 10:33:37.970623 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:38.426206 master-2 kubenswrapper[4776]: I1011 10:33:38.426115 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2"] Oct 11 10:33:38.426754 master-2 kubenswrapper[4776]: E1011 10:33:38.426709 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc3dcbf6-abe1-45ca-992b-4d1c7e419128" containerName="installer" Oct 11 10:33:38.426754 master-2 kubenswrapper[4776]: I1011 10:33:38.426749 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc3dcbf6-abe1-45ca-992b-4d1c7e419128" containerName="installer" Oct 11 10:33:38.427044 master-2 kubenswrapper[4776]: I1011 10:33:38.427002 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc3dcbf6-abe1-45ca-992b-4d1c7e419128" containerName="installer" Oct 11 10:33:38.428154 master-2 kubenswrapper[4776]: I1011 10:33:38.428102 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" Oct 11 10:33:38.433020 master-2 kubenswrapper[4776]: I1011 10:33:38.432840 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"openshift-service-ca.crt" Oct 11 10:33:38.433490 master-2 kubenswrapper[4776]: I1011 10:33:38.432967 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Oct 11 10:33:38.438830 master-2 kubenswrapper[4776]: I1011 10:33:38.438764 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2"] Oct 11 10:33:38.582082 master-2 kubenswrapper[4776]: I1011 10:33:38.581980 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfzf9\" (UniqueName: \"kubernetes.io/projected/c76a7758-6688-4e6c-a01a-c3e29db3c134-kube-api-access-bfzf9\") pod \"openshift-kube-scheduler-guard-master-2\" (UID: \"c76a7758-6688-4e6c-a01a-c3e29db3c134\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" Oct 11 10:33:38.683365 master-2 kubenswrapper[4776]: I1011 10:33:38.683183 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfzf9\" (UniqueName: \"kubernetes.io/projected/c76a7758-6688-4e6c-a01a-c3e29db3c134-kube-api-access-bfzf9\") pod \"openshift-kube-scheduler-guard-master-2\" (UID: \"c76a7758-6688-4e6c-a01a-c3e29db3c134\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" Oct 11 10:33:38.710058 master-2 kubenswrapper[4776]: I1011 10:33:38.709962 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfzf9\" (UniqueName: \"kubernetes.io/projected/c76a7758-6688-4e6c-a01a-c3e29db3c134-kube-api-access-bfzf9\") pod \"openshift-kube-scheduler-guard-master-2\" (UID: \"c76a7758-6688-4e6c-a01a-c3e29db3c134\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" Oct 11 10:33:38.802983 master-2 kubenswrapper[4776]: I1011 10:33:38.802872 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" Oct 11 10:33:38.969583 master-2 kubenswrapper[4776]: I1011 10:33:38.969463 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:38.969583 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:38.969583 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:38.969583 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:38.969583 master-2 kubenswrapper[4776]: I1011 10:33:38.969515 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:39.265369 master-2 kubenswrapper[4776]: I1011 10:33:39.265287 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2"] Oct 11 10:33:39.971465 master-2 kubenswrapper[4776]: I1011 10:33:39.971349 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:39.971465 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:39.971465 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:39.971465 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:39.971465 master-2 kubenswrapper[4776]: I1011 10:33:39.971458 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:39.999075 master-2 kubenswrapper[4776]: I1011 10:33:39.998961 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" event={"ID":"c76a7758-6688-4e6c-a01a-c3e29db3c134","Type":"ContainerStarted","Data":"20056c73232015d79ef714b5dad538c641ef391a0cd27dd4dc7ed866bc33b1e0"} Oct 11 10:33:39.999075 master-2 kubenswrapper[4776]: I1011 10:33:39.999043 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" event={"ID":"c76a7758-6688-4e6c-a01a-c3e29db3c134","Type":"ContainerStarted","Data":"5fd0ea97c1803fb86d9cb87015cc2e41104b2914dc612d7d83cad62059528472"} Oct 11 10:33:39.999386 master-2 kubenswrapper[4776]: I1011 10:33:39.999303 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" Oct 11 10:33:40.006570 master-2 kubenswrapper[4776]: I1011 10:33:40.006531 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" Oct 11 10:33:40.023843 master-2 kubenswrapper[4776]: I1011 10:33:40.023652 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" podStartSLOduration=2.023630833 podStartE2EDuration="2.023630833s" podCreationTimestamp="2025-10-11 10:33:38 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:33:40.021081416 +0000 UTC m=+454.805508165" watchObservedRunningTime="2025-10-11 10:33:40.023630833 +0000 UTC m=+454.808057552" Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: I1011 10:33:40.498984 4776 patch_prober.go:28] interesting pod/apiserver-777cc846dc-729nm container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:33:40.499056 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:33:40.500203 master-2 kubenswrapper[4776]: I1011 10:33:40.499071 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-777cc846dc-729nm" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:40.969770 master-2 kubenswrapper[4776]: I1011 10:33:40.969613 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:40.969770 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:40.969770 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:40.969770 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:40.970252 master-2 kubenswrapper[4776]: I1011 10:33:40.969805 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed 
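The openshift-apiserver readiness failures above quote the apiserver's verbose readyz output: one "[+]name ok" or "[-]name failed: reason withheld" line per check, ending in "readyz check failed" whenever any check is failing, here only "shutdown", consistent with that apiserver pod being torn down and replaced later in the log. A small parser sketch over an abridged copy of that output (only three of the checks are reproduced):

// readyz_parse_sketch.go - parses the verbose check output quoted in the
// probe entries above and lists the failing checks.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// Abridged copy of the readyz output from the entry above.
const sample = `[+]ping ok
[+]log ok
[-]shutdown failed: reason withheld
readyz check failed`

func main() {
	var failed []string
	sc := bufio.NewScanner(strings.NewReader(sample))
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "[-]") {
			// "[-]shutdown failed: reason withheld" -> "shutdown"
			name := strings.Fields(strings.TrimPrefix(line, "[-]"))[0]
			failed = append(failed, name)
		}
	}
	if len(failed) > 0 {
		fmt.Println("probe would fail; failing checks:", failed)
	} else {
		fmt.Println("all checks ok")
	}
}
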
with statuscode: 500" Oct 11 10:33:41.971991 master-2 kubenswrapper[4776]: I1011 10:33:41.971875 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:41.971991 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:41.971991 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:41.971991 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:41.972916 master-2 kubenswrapper[4776]: I1011 10:33:41.972006 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:42.970734 master-2 kubenswrapper[4776]: I1011 10:33:42.970626 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:42.970734 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:42.970734 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:42.970734 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:42.971168 master-2 kubenswrapper[4776]: I1011 10:33:42.970771 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:43.475244 master-2 kubenswrapper[4776]: I1011 10:33:43.475159 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:33:43.970755 master-2 kubenswrapper[4776]: I1011 10:33:43.970629 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:43.970755 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:43.970755 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:43.970755 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:43.971184 master-2 kubenswrapper[4776]: I1011 10:33:43.970788 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:44.750960 master-2 kubenswrapper[4776]: E1011 10:33:44.750850 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-conmon-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf\": RecentStats: unable to find data in memory cache]" Oct 11 10:33:44.753222 master-2 kubenswrapper[4776]: E1011 10:33:44.753162 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-conmon-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:33:44.753373 master-2 kubenswrapper[4776]: E1011 10:33:44.753234 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf\": RecentStats: unable to find data in memory cache]" Oct 11 10:33:44.756950 master-2 kubenswrapper[4776]: E1011 10:33:44.756890 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-conmon-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podfc3dcbf6_abe1_45ca_992b_4d1c7e419128.slice/crio-1c42b2aad8d56f14bb8f7731268d9b68d7e1d8d9d1d9e5f2c2ae8c23a10aa4cb.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:33:44.757294 master-2 kubenswrapper[4776]: E1011 10:33:44.757250 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-conmon-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:33:44.757403 master-2 kubenswrapper[4776]: 
E1011 10:33:44.757360 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-conmon-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:33:44.757403 master-2 kubenswrapper[4776]: E1011 10:33:44.757395 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-conmon-2486353b3b6462d5e98cb24747afeeaf5ae72f91e0a411ae6b701751ae441123.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd7d02073_00a3_41a2_8ca4_6932819886b8.slice/crio-d98feaf58855515cb03ea114d88013a7fa582c51b05e2df18b432bce9d395acf\": RecentStats: unable to find data in memory cache]" Oct 11 10:33:44.970905 master-2 kubenswrapper[4776]: I1011 10:33:44.969735 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:44.970905 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:44.970905 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:44.970905 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:44.970905 master-2 kubenswrapper[4776]: I1011 10:33:44.969797 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:45.028664 master-2 kubenswrapper[4776]: I1011 10:33:45.028594 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2"] Oct 11 10:33:45.038078 master-2 kubenswrapper[4776]: I1011 10:33:45.038026 4776 generic.go:334] "Generic (PLEG): container finished" podID="3a156e42-88da-4ce6-9995-6865609e2711" containerID="90d6acbfbe353ba98c33d9d9275a952ddeabac687eed9e519947f935a2f44edf" exitCode=0 Oct 11 10:33:45.038288 master-2 kubenswrapper[4776]: I1011 10:33:45.038185 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-777cc846dc-729nm" event={"ID":"3a156e42-88da-4ce6-9995-6865609e2711","Type":"ContainerDied","Data":"90d6acbfbe353ba98c33d9d9275a952ddeabac687eed9e519947f935a2f44edf"} Oct 11 10:33:45.038444 master-2 kubenswrapper[4776]: I1011 
10:33:45.038415 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-777cc846dc-729nm" event={"ID":"3a156e42-88da-4ce6-9995-6865609e2711","Type":"ContainerDied","Data":"ee25a7d80d4e6c2818c35f57b04d647f323d2db779f406477d1f90d328d3f868"} Oct 11 10:33:45.038555 master-2 kubenswrapper[4776]: I1011 10:33:45.038536 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee25a7d80d4e6c2818c35f57b04d647f323d2db779f406477d1f90d328d3f868" Oct 11 10:33:45.069081 master-2 kubenswrapper[4776]: I1011 10:33:45.069042 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:33:45.182942 master-2 kubenswrapper[4776]: I1011 10:33:45.182864 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-node-pullsecrets\") pod \"3a156e42-88da-4ce6-9995-6865609e2711\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " Oct 11 10:33:45.183140 master-2 kubenswrapper[4776]: I1011 10:33:45.182978 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-audit\") pod \"3a156e42-88da-4ce6-9995-6865609e2711\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " Oct 11 10:33:45.183140 master-2 kubenswrapper[4776]: I1011 10:33:45.182999 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-config\") pod \"3a156e42-88da-4ce6-9995-6865609e2711\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " Oct 11 10:33:45.183140 master-2 kubenswrapper[4776]: I1011 10:33:45.183017 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-trusted-ca-bundle\") pod \"3a156e42-88da-4ce6-9995-6865609e2711\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " Oct 11 10:33:45.183140 master-2 kubenswrapper[4776]: I1011 10:33:45.183041 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-etcd-serving-ca\") pod \"3a156e42-88da-4ce6-9995-6865609e2711\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " Oct 11 10:33:45.183140 master-2 kubenswrapper[4776]: I1011 10:33:45.183065 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-serving-cert\") pod \"3a156e42-88da-4ce6-9995-6865609e2711\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " Oct 11 10:33:45.183140 master-2 kubenswrapper[4776]: I1011 10:33:45.183079 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-encryption-config\") pod \"3a156e42-88da-4ce6-9995-6865609e2711\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " Oct 11 10:33:45.183140 master-2 kubenswrapper[4776]: I1011 10:33:45.183099 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-image-import-ca\") pod 
\"3a156e42-88da-4ce6-9995-6865609e2711\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " Oct 11 10:33:45.183140 master-2 kubenswrapper[4776]: I1011 10:33:45.183075 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "3a156e42-88da-4ce6-9995-6865609e2711" (UID: "3a156e42-88da-4ce6-9995-6865609e2711"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:33:45.183140 master-2 kubenswrapper[4776]: I1011 10:33:45.183142 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a156e42-88da-4ce6-9995-6865609e2711" (UID: "3a156e42-88da-4ce6-9995-6865609e2711"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:33:45.183515 master-2 kubenswrapper[4776]: I1011 10:33:45.183114 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-audit-dir\") pod \"3a156e42-88da-4ce6-9995-6865609e2711\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " Oct 11 10:33:45.183515 master-2 kubenswrapper[4776]: I1011 10:33:45.183280 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qgjs\" (UniqueName: \"kubernetes.io/projected/3a156e42-88da-4ce6-9995-6865609e2711-kube-api-access-9qgjs\") pod \"3a156e42-88da-4ce6-9995-6865609e2711\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " Oct 11 10:33:45.183515 master-2 kubenswrapper[4776]: I1011 10:33:45.183425 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-etcd-client\") pod \"3a156e42-88da-4ce6-9995-6865609e2711\" (UID: \"3a156e42-88da-4ce6-9995-6865609e2711\") " Oct 11 10:33:45.184113 master-2 kubenswrapper[4776]: I1011 10:33:45.184059 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "3a156e42-88da-4ce6-9995-6865609e2711" (UID: "3a156e42-88da-4ce6-9995-6865609e2711"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:33:45.184213 master-2 kubenswrapper[4776]: I1011 10:33:45.184135 4776 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-node-pullsecrets\") on node \"master-2\" DevicePath \"\"" Oct 11 10:33:45.184213 master-2 kubenswrapper[4776]: I1011 10:33:45.184151 4776 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a156e42-88da-4ce6-9995-6865609e2711-audit-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:33:45.184608 master-2 kubenswrapper[4776]: I1011 10:33:45.184562 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-config" (OuterVolumeSpecName: "config") pod "3a156e42-88da-4ce6-9995-6865609e2711" (UID: "3a156e42-88da-4ce6-9995-6865609e2711"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:33:45.184608 master-2 kubenswrapper[4776]: I1011 10:33:45.184567 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "3a156e42-88da-4ce6-9995-6865609e2711" (UID: "3a156e42-88da-4ce6-9995-6865609e2711"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:33:45.184747 master-2 kubenswrapper[4776]: I1011 10:33:45.184656 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-audit" (OuterVolumeSpecName: "audit") pod "3a156e42-88da-4ce6-9995-6865609e2711" (UID: "3a156e42-88da-4ce6-9995-6865609e2711"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:33:45.184848 master-2 kubenswrapper[4776]: I1011 10:33:45.184817 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3a156e42-88da-4ce6-9995-6865609e2711" (UID: "3a156e42-88da-4ce6-9995-6865609e2711"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:33:45.186125 master-2 kubenswrapper[4776]: I1011 10:33:45.186092 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3a156e42-88da-4ce6-9995-6865609e2711" (UID: "3a156e42-88da-4ce6-9995-6865609e2711"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:33:45.186419 master-2 kubenswrapper[4776]: I1011 10:33:45.186386 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "3a156e42-88da-4ce6-9995-6865609e2711" (UID: "3a156e42-88da-4ce6-9995-6865609e2711"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:33:45.186551 master-2 kubenswrapper[4776]: I1011 10:33:45.186431 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "3a156e42-88da-4ce6-9995-6865609e2711" (UID: "3a156e42-88da-4ce6-9995-6865609e2711"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:33:45.186999 master-2 kubenswrapper[4776]: I1011 10:33:45.186929 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a156e42-88da-4ce6-9995-6865609e2711-kube-api-access-9qgjs" (OuterVolumeSpecName: "kube-api-access-9qgjs") pod "3a156e42-88da-4ce6-9995-6865609e2711" (UID: "3a156e42-88da-4ce6-9995-6865609e2711"). InnerVolumeSpecName "kube-api-access-9qgjs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:33:45.286491 master-2 kubenswrapper[4776]: I1011 10:33:45.286268 4776 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-audit\") on node \"master-2\" DevicePath \"\"" Oct 11 10:33:45.286491 master-2 kubenswrapper[4776]: I1011 10:33:45.286358 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:33:45.286491 master-2 kubenswrapper[4776]: I1011 10:33:45.286399 4776 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:33:45.286491 master-2 kubenswrapper[4776]: I1011 10:33:45.286417 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-etcd-serving-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:33:45.286491 master-2 kubenswrapper[4776]: I1011 10:33:45.286429 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:33:45.286491 master-2 kubenswrapper[4776]: I1011 10:33:45.286441 4776 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-encryption-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:33:45.286491 master-2 kubenswrapper[4776]: I1011 10:33:45.286452 4776 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3a156e42-88da-4ce6-9995-6865609e2711-image-import-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:33:45.286491 master-2 kubenswrapper[4776]: I1011 10:33:45.286465 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qgjs\" (UniqueName: \"kubernetes.io/projected/3a156e42-88da-4ce6-9995-6865609e2711-kube-api-access-9qgjs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:33:45.286491 master-2 kubenswrapper[4776]: I1011 10:33:45.286477 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a156e42-88da-4ce6-9995-6865609e2711-etcd-client\") on node \"master-2\" DevicePath \"\"" Oct 11 10:33:45.969813 master-2 kubenswrapper[4776]: I1011 10:33:45.969654 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:45.969813 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:45.969813 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:45.969813 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:45.971033 master-2 kubenswrapper[4776]: I1011 10:33:45.969812 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:46.043877 master-2 kubenswrapper[4776]: I1011 
10:33:46.043786 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-777cc846dc-729nm" Oct 11 10:33:46.093401 master-2 kubenswrapper[4776]: I1011 10:33:46.093319 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-777cc846dc-729nm"] Oct 11 10:33:46.098794 master-2 kubenswrapper[4776]: I1011 10:33:46.098728 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-777cc846dc-729nm"] Oct 11 10:33:46.971571 master-2 kubenswrapper[4776]: I1011 10:33:46.971480 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:46.971571 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:46.971571 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:46.971571 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:46.971571 master-2 kubenswrapper[4776]: I1011 10:33:46.971560 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:47.802701 master-2 kubenswrapper[4776]: I1011 10:33:47.802541 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log" Oct 11 10:33:47.971028 master-2 kubenswrapper[4776]: I1011 10:33:47.970942 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:47.971028 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:47.971028 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:47.971028 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:47.971028 master-2 kubenswrapper[4776]: I1011 10:33:47.971023 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:48.071095 master-2 kubenswrapper[4776]: I1011 10:33:48.070895 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a156e42-88da-4ce6-9995-6865609e2711" path="/var/lib/kubelet/pods/3a156e42-88da-4ce6-9995-6865609e2711/volumes" Oct 11 10:33:48.822866 master-2 kubenswrapper[4776]: I1011 10:33:48.822796 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-7845cf54d8-h5nlf"] Oct 11 10:33:48.823183 master-2 kubenswrapper[4776]: E1011 10:33:48.823107 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" Oct 11 10:33:48.823183 master-2 kubenswrapper[4776]: I1011 10:33:48.823127 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" Oct 11 10:33:48.823183 master-2 kubenswrapper[4776]: E1011 10:33:48.823150 4776 cpu_manager.go:410] "RemoveStaleState: 
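The "Cleaned up orphaned pod volumes dir" entry above shows where the kubelet keeps per-pod volume mounts on disk, /var/lib/kubelet/pods/<podUID>/volumes, removed here once the deleted openshift-apiserver pod's volumes were unmounted. A trivial sketch composing that path from the UID in the entry:

// pod_volumes_path_sketch.go - composes the per-pod volumes directory that the
// "Cleaned up orphaned pod volumes dir" entry above refers to.
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	podUID := "3a156e42-88da-4ce6-9995-6865609e2711"
	fmt.Println(filepath.Join("/var/lib/kubelet/pods", podUID, "volumes"))
	// -> /var/lib/kubelet/pods/3a156e42-88da-4ce6-9995-6865609e2711/volumes
}
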
removing container" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver-check-endpoints" Oct 11 10:33:48.823183 master-2 kubenswrapper[4776]: I1011 10:33:48.823163 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver-check-endpoints" Oct 11 10:33:48.823447 master-2 kubenswrapper[4776]: E1011 10:33:48.823178 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="fix-audit-permissions" Oct 11 10:33:48.823447 master-2 kubenswrapper[4776]: I1011 10:33:48.823268 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="fix-audit-permissions" Oct 11 10:33:48.823575 master-2 kubenswrapper[4776]: I1011 10:33:48.823467 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver-check-endpoints" Oct 11 10:33:48.823575 master-2 kubenswrapper[4776]: I1011 10:33:48.823495 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a156e42-88da-4ce6-9995-6865609e2711" containerName="openshift-apiserver" Oct 11 10:33:48.824924 master-2 kubenswrapper[4776]: I1011 10:33:48.824875 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:48.829538 master-2 kubenswrapper[4776]: I1011 10:33:48.829482 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 11 10:33:48.829538 master-2 kubenswrapper[4776]: I1011 10:33:48.829529 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 11 10:33:48.829893 master-2 kubenswrapper[4776]: I1011 10:33:48.829735 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 11 10:33:48.829970 master-2 kubenswrapper[4776]: I1011 10:33:48.829900 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 11 10:33:48.830189 master-2 kubenswrapper[4776]: I1011 10:33:48.830112 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 11 10:33:48.831184 master-2 kubenswrapper[4776]: I1011 10:33:48.831064 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 11 10:33:48.831184 master-2 kubenswrapper[4776]: I1011 10:33:48.831087 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 11 10:33:48.831847 master-2 kubenswrapper[4776]: I1011 10:33:48.831783 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 11 10:33:48.832019 master-2 kubenswrapper[4776]: I1011 10:33:48.831890 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 11 10:33:48.840600 master-2 kubenswrapper[4776]: I1011 10:33:48.840528 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-7845cf54d8-h5nlf"] Oct 11 10:33:48.843487 master-2 kubenswrapper[4776]: I1011 10:33:48.843416 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 11 10:33:48.937058 master-2 kubenswrapper[4776]: I1011 10:33:48.936974 4776 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-trusted-ca-bundle\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:48.937058 master-2 kubenswrapper[4776]: I1011 10:33:48.937057 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q8hn\" (UniqueName: \"kubernetes.io/projected/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-kube-api-access-5q8hn\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:48.937439 master-2 kubenswrapper[4776]: I1011 10:33:48.937127 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-config\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:48.937439 master-2 kubenswrapper[4776]: I1011 10:33:48.937185 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-serving-cert\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:48.937439 master-2 kubenswrapper[4776]: I1011 10:33:48.937279 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-node-pullsecrets\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:48.937439 master-2 kubenswrapper[4776]: I1011 10:33:48.937339 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-image-import-ca\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:48.937697 master-2 kubenswrapper[4776]: I1011 10:33:48.937453 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-serving-ca\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:48.937697 master-2 kubenswrapper[4776]: I1011 10:33:48.937539 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:48.937697 master-2 kubenswrapper[4776]: I1011 10:33:48.937581 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit-dir\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:48.937918 master-2 kubenswrapper[4776]: I1011 10:33:48.937740 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-client\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:48.937918 master-2 kubenswrapper[4776]: I1011 10:33:48.937779 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-encryption-config\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:48.970092 master-2 kubenswrapper[4776]: I1011 10:33:48.970008 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:48.970092 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:48.970092 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:48.970092 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:48.970353 master-2 kubenswrapper[4776]: I1011 10:33:48.970109 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:49.039109 master-2 kubenswrapper[4776]: I1011 10:33:49.039019 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-serving-ca\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:49.039109 master-2 kubenswrapper[4776]: I1011 10:33:49.039104 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:49.039377 master-2 kubenswrapper[4776]: I1011 10:33:49.039143 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit-dir\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:49.039377 master-2 kubenswrapper[4776]: I1011 10:33:49.039212 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-client\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 
10:33:49.039377 master-2 kubenswrapper[4776]: I1011 10:33:49.039245 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-encryption-config\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:49.039377 master-2 kubenswrapper[4776]: I1011 10:33:49.039335 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q8hn\" (UniqueName: \"kubernetes.io/projected/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-kube-api-access-5q8hn\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:49.039377 master-2 kubenswrapper[4776]: I1011 10:33:49.039375 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-trusted-ca-bundle\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:49.039727 master-2 kubenswrapper[4776]: I1011 10:33:49.039410 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-config\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:49.039727 master-2 kubenswrapper[4776]: I1011 10:33:49.039438 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-serving-cert\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:49.039727 master-2 kubenswrapper[4776]: I1011 10:33:49.039461 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-node-pullsecrets\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:49.039727 master-2 kubenswrapper[4776]: I1011 10:33:49.039480 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-image-import-ca\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:49.039727 master-2 kubenswrapper[4776]: I1011 10:33:49.039546 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit-dir\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:49.039727 master-2 kubenswrapper[4776]: I1011 10:33:49.039715 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-node-pullsecrets\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: 
\"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:49.040177 master-2 kubenswrapper[4776]: I1011 10:33:49.040030 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:49.040553 master-2 kubenswrapper[4776]: I1011 10:33:49.040471 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-image-import-ca\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:49.041085 master-2 kubenswrapper[4776]: I1011 10:33:49.041002 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-config\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:49.041988 master-2 kubenswrapper[4776]: I1011 10:33:49.041933 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-serving-ca\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:49.042172 master-2 kubenswrapper[4776]: I1011 10:33:49.042111 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-trusted-ca-bundle\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:49.043618 master-2 kubenswrapper[4776]: I1011 10:33:49.043540 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-serving-cert\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:49.044579 master-2 kubenswrapper[4776]: I1011 10:33:49.044516 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-encryption-config\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:49.064025 master-2 kubenswrapper[4776]: I1011 10:33:49.063934 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-client\") pod \"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:49.070884 master-2 kubenswrapper[4776]: I1011 10:33:49.070826 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q8hn\" (UniqueName: \"kubernetes.io/projected/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-kube-api-access-5q8hn\") pod 
\"apiserver-7845cf54d8-h5nlf\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:49.191826 master-2 kubenswrapper[4776]: I1011 10:33:49.191723 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:49.668366 master-2 kubenswrapper[4776]: I1011 10:33:49.668298 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-7845cf54d8-h5nlf"] Oct 11 10:33:49.676237 master-2 kubenswrapper[4776]: W1011 10:33:49.676168 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c500140_fe5c_4fa2_914b_bb1e0c5758ab.slice/crio-096d8e6920a041962b0dbcc41b2a283a99938e8e8c28669a5a9ec5f599e847be WatchSource:0}: Error finding container 096d8e6920a041962b0dbcc41b2a283a99938e8e8c28669a5a9ec5f599e847be: Status 404 returned error can't find the container with id 096d8e6920a041962b0dbcc41b2a283a99938e8e8c28669a5a9ec5f599e847be Oct 11 10:33:49.969626 master-2 kubenswrapper[4776]: I1011 10:33:49.969582 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:49.969626 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:49.969626 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:49.969626 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:49.969815 master-2 kubenswrapper[4776]: I1011 10:33:49.969640 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:50.075378 master-2 kubenswrapper[4776]: I1011 10:33:50.075308 4776 generic.go:334] "Generic (PLEG): container finished" podID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerID="227b0ea6948a9655dda8b2fd87923ef92a7b65ccb09fd037cc6c580377f3d16c" exitCode=0 Oct 11 10:33:50.075378 master-2 kubenswrapper[4776]: I1011 10:33:50.075370 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" event={"ID":"8c500140-fe5c-4fa2-914b-bb1e0c5758ab","Type":"ContainerDied","Data":"227b0ea6948a9655dda8b2fd87923ef92a7b65ccb09fd037cc6c580377f3d16c"} Oct 11 10:33:50.075604 master-2 kubenswrapper[4776]: I1011 10:33:50.075406 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" event={"ID":"8c500140-fe5c-4fa2-914b-bb1e0c5758ab","Type":"ContainerStarted","Data":"096d8e6920a041962b0dbcc41b2a283a99938e8e8c28669a5a9ec5f599e847be"} Oct 11 10:33:50.970263 master-2 kubenswrapper[4776]: I1011 10:33:50.970207 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:50.970263 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:50.970263 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:50.970263 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:50.970263 master-2 kubenswrapper[4776]: I1011 10:33:50.970267 4776 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:51.083105 master-2 kubenswrapper[4776]: I1011 10:33:51.083025 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" event={"ID":"8c500140-fe5c-4fa2-914b-bb1e0c5758ab","Type":"ContainerStarted","Data":"33b8451dee3f8d5ed8e144b04e3c4757d199f647e9b246655c277be3cef812a5"} Oct 11 10:33:51.083105 master-2 kubenswrapper[4776]: I1011 10:33:51.083082 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" event={"ID":"8c500140-fe5c-4fa2-914b-bb1e0c5758ab","Type":"ContainerStarted","Data":"a1a9c7629f1fd873d3ec9d24b009ce28e04c1ae342e924bb98a2d69fd1fdcc5f"} Oct 11 10:33:51.116658 master-2 kubenswrapper[4776]: I1011 10:33:51.116569 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podStartSLOduration=60.116549588 podStartE2EDuration="1m0.116549588s" podCreationTimestamp="2025-10-11 10:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:33:51.1136065 +0000 UTC m=+465.898033209" watchObservedRunningTime="2025-10-11 10:33:51.116549588 +0000 UTC m=+465.900976297" Oct 11 10:33:51.970009 master-2 kubenswrapper[4776]: I1011 10:33:51.969931 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:51.970009 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:51.970009 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:51.970009 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:51.970606 master-2 kubenswrapper[4776]: I1011 10:33:51.970012 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:52.969529 master-2 kubenswrapper[4776]: I1011 10:33:52.969470 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:52.969529 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:52.969529 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:52.969529 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:52.969529 master-2 kubenswrapper[4776]: I1011 10:33:52.969531 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:53.970649 master-2 kubenswrapper[4776]: I1011 10:33:53.970531 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:53.970649 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:53.970649 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:53.970649 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:53.971517 master-2 kubenswrapper[4776]: I1011 10:33:53.970668 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:54.193141 master-2 kubenswrapper[4776]: I1011 10:33:54.193060 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:54.193141 master-2 kubenswrapper[4776]: I1011 10:33:54.193131 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:54.206223 master-2 kubenswrapper[4776]: I1011 10:33:54.206168 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:54.969582 master-2 kubenswrapper[4776]: I1011 10:33:54.969513 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:54.969582 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:54.969582 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:54.969582 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:54.969919 master-2 kubenswrapper[4776]: I1011 10:33:54.969609 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:55.116468 master-2 kubenswrapper[4776]: I1011 10:33:55.116427 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:33:55.970627 master-2 kubenswrapper[4776]: I1011 10:33:55.970503 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:55.970627 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:55.970627 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:55.970627 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:55.971614 master-2 kubenswrapper[4776]: I1011 10:33:55.970646 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:56.970269 master-2 kubenswrapper[4776]: I1011 10:33:56.970197 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:56.970269 master-2 
kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:56.970269 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:56.970269 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:56.970882 master-2 kubenswrapper[4776]: I1011 10:33:56.970276 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:57.976271 master-2 kubenswrapper[4776]: I1011 10:33:57.976207 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:57.976271 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:57.976271 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:57.976271 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:57.977010 master-2 kubenswrapper[4776]: I1011 10:33:57.976295 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:58.970029 master-2 kubenswrapper[4776]: I1011 10:33:58.969933 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:58.970029 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:58.970029 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:58.970029 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:58.970291 master-2 kubenswrapper[4776]: I1011 10:33:58.970046 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:33:59.970342 master-2 kubenswrapper[4776]: I1011 10:33:59.970156 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:33:59.970342 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:33:59.970342 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:33:59.970342 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:33:59.971303 master-2 kubenswrapper[4776]: I1011 10:33:59.970361 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:00.970466 master-2 kubenswrapper[4776]: I1011 10:34:00.970369 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:34:00.970466 master-2 
kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:34:00.970466 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:34:00.970466 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:34:00.971478 master-2 kubenswrapper[4776]: I1011 10:34:00.970477 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:01.970512 master-2 kubenswrapper[4776]: I1011 10:34:01.970431 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:34:01.970512 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:34:01.970512 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:34:01.970512 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:34:01.971665 master-2 kubenswrapper[4776]: I1011 10:34:01.970520 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:02.971695 master-2 kubenswrapper[4776]: I1011 10:34:02.971603 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:34:02.971695 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:34:02.971695 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:34:02.971695 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:34:02.972268 master-2 kubenswrapper[4776]: I1011 10:34:02.971722 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:03.970507 master-2 kubenswrapper[4776]: I1011 10:34:03.970444 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:34:03.970507 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:34:03.970507 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:34:03.970507 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:34:03.970798 master-2 kubenswrapper[4776]: I1011 10:34:03.970508 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:04.969272 master-2 kubenswrapper[4776]: I1011 10:34:04.969210 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:34:04.969272 master-2 
kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:34:04.969272 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:34:04.969272 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:34:04.969814 master-2 kubenswrapper[4776]: I1011 10:34:04.969303 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:05.970011 master-2 kubenswrapper[4776]: I1011 10:34:05.969937 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:34:05.970011 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:34:05.970011 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:34:05.970011 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:34:05.970731 master-2 kubenswrapper[4776]: I1011 10:34:05.970030 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:06.970214 master-2 kubenswrapper[4776]: I1011 10:34:06.970095 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:34:06.970214 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:34:06.970214 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:34:06.970214 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:34:06.970214 master-2 kubenswrapper[4776]: I1011 10:34:06.970199 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:07.970240 master-2 kubenswrapper[4776]: I1011 10:34:07.970154 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:34:07.970240 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:34:07.970240 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:34:07.970240 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:34:07.971375 master-2 kubenswrapper[4776]: I1011 10:34:07.970247 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:08.969508 master-2 kubenswrapper[4776]: I1011 10:34:08.969455 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:34:08.969508 master-2 
kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:34:08.969508 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:34:08.969508 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:34:08.969801 master-2 kubenswrapper[4776]: I1011 10:34:08.969546 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:09.969501 master-2 kubenswrapper[4776]: I1011 10:34:09.969429 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:34:09.969501 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:34:09.969501 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:34:09.969501 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:34:09.970136 master-2 kubenswrapper[4776]: I1011 10:34:09.969537 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:10.970605 master-2 kubenswrapper[4776]: I1011 10:34:10.970509 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:34:10.970605 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:34:10.970605 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:34:10.970605 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:34:10.971203 master-2 kubenswrapper[4776]: I1011 10:34:10.970623 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:11.969612 master-2 kubenswrapper[4776]: I1011 10:34:11.969554 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:34:11.969612 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:34:11.969612 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:34:11.969612 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:34:11.970251 master-2 kubenswrapper[4776]: I1011 10:34:11.970207 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:12.969511 master-2 kubenswrapper[4776]: I1011 10:34:12.969403 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:34:12.969511 master-2 
kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:34:12.969511 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:34:12.969511 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:34:12.969511 master-2 kubenswrapper[4776]: I1011 10:34:12.969466 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:13.970283 master-2 kubenswrapper[4776]: I1011 10:34:13.970192 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:34:13.970283 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:34:13.970283 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:34:13.970283 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:34:13.970283 master-2 kubenswrapper[4776]: I1011 10:34:13.970282 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:14.970085 master-2 kubenswrapper[4776]: I1011 10:34:14.969978 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:34:14.970085 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:34:14.970085 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:34:14.970085 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:34:14.970085 master-2 kubenswrapper[4776]: I1011 10:34:14.970054 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:15.969861 master-2 kubenswrapper[4776]: I1011 10:34:15.969741 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:34:15.969861 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:34:15.969861 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:34:15.969861 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:34:15.970419 master-2 kubenswrapper[4776]: I1011 10:34:15.969856 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:16.970774 master-2 kubenswrapper[4776]: I1011 10:34:16.970632 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:34:16.970774 master-2 
kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:34:16.970774 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:34:16.970774 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:34:16.971979 master-2 kubenswrapper[4776]: I1011 10:34:16.970785 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:34:16.971979 master-2 kubenswrapper[4776]: I1011 10:34:16.970868 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:34:16.971979 master-2 kubenswrapper[4776]: I1011 10:34:16.971816 4776 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"532aa417a81b450c3ee53605926d217d88d13c85c6a1d9e5ea21cc1e35ca9346"} pod="openshift-ingress/router-default-5ddb89f76-57kcw" containerMessage="Container router failed startup probe, will be restarted" Oct 11 10:34:16.971979 master-2 kubenswrapper[4776]: I1011 10:34:16.971884 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" containerID="cri-o://532aa417a81b450c3ee53605926d217d88d13c85c6a1d9e5ea21cc1e35ca9346" gracePeriod=3600 Oct 11 10:34:17.793744 master-2 kubenswrapper[4776]: I1011 10:34:17.793602 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log" Oct 11 10:34:19.003863 master-2 kubenswrapper[4776]: I1011 10:34:19.003735 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-5-master-2"] Oct 11 10:34:19.005432 master-2 kubenswrapper[4776]: I1011 10:34:19.005359 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:19.048481 master-2 kubenswrapper[4776]: I1011 10:34:19.022981 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-5-master-2"] Oct 11 10:34:19.165332 master-2 kubenswrapper[4776]: I1011 10:34:19.165263 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-kubelet-dir\") pod \"installer-5-master-2\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") " pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:19.165332 master-2 kubenswrapper[4776]: I1011 10:34:19.165340 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-var-lock\") pod \"installer-5-master-2\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") " pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:19.165568 master-2 kubenswrapper[4776]: I1011 10:34:19.165481 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebeec22d-9309-4efd-bbc0-f44c750a258c-kube-api-access\") pod \"installer-5-master-2\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") " pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:19.267121 master-2 kubenswrapper[4776]: I1011 10:34:19.266975 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebeec22d-9309-4efd-bbc0-f44c750a258c-kube-api-access\") pod \"installer-5-master-2\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") " pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:19.267305 master-2 kubenswrapper[4776]: I1011 10:34:19.267135 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-kubelet-dir\") pod \"installer-5-master-2\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") " pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:19.267305 master-2 kubenswrapper[4776]: I1011 10:34:19.267173 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-var-lock\") pod \"installer-5-master-2\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") " pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:19.267305 master-2 kubenswrapper[4776]: I1011 10:34:19.267259 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-var-lock\") pod \"installer-5-master-2\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") " pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:19.267465 master-2 kubenswrapper[4776]: I1011 10:34:19.267315 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-kubelet-dir\") pod \"installer-5-master-2\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") " pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:19.293558 master-2 kubenswrapper[4776]: I1011 10:34:19.293461 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/ebeec22d-9309-4efd-bbc0-f44c750a258c-kube-api-access\") pod \"installer-5-master-2\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") " pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:19.365240 master-2 kubenswrapper[4776]: I1011 10:34:19.365166 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:19.829544 master-2 kubenswrapper[4776]: I1011 10:34:19.829487 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-5-master-2"] Oct 11 10:34:19.842734 master-2 kubenswrapper[4776]: W1011 10:34:19.842640 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podebeec22d_9309_4efd_bbc0_f44c750a258c.slice/crio-6865f7f3eb804d27ba2426a02d0ac86e934a61a183cb3434647c8b9458b1b6af WatchSource:0}: Error finding container 6865f7f3eb804d27ba2426a02d0ac86e934a61a183cb3434647c8b9458b1b6af: Status 404 returned error can't find the container with id 6865f7f3eb804d27ba2426a02d0ac86e934a61a183cb3434647c8b9458b1b6af Oct 11 10:34:20.286259 master-2 kubenswrapper[4776]: I1011 10:34:20.286173 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-5-master-2" event={"ID":"ebeec22d-9309-4efd-bbc0-f44c750a258c","Type":"ContainerStarted","Data":"25ad594b9284fd2089fd6abfaa970257ef0a465b5e9177e3d753cb32feaf3eb1"} Oct 11 10:34:20.286259 master-2 kubenswrapper[4776]: I1011 10:34:20.286234 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-5-master-2" event={"ID":"ebeec22d-9309-4efd-bbc0-f44c750a258c","Type":"ContainerStarted","Data":"6865f7f3eb804d27ba2426a02d0ac86e934a61a183cb3434647c8b9458b1b6af"} Oct 11 10:34:20.309345 master-2 kubenswrapper[4776]: I1011 10:34:20.309239 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-5-master-2" podStartSLOduration=2.309213625 podStartE2EDuration="2.309213625s" podCreationTimestamp="2025-10-11 10:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:34:20.303850683 +0000 UTC m=+495.088277462" watchObservedRunningTime="2025-10-11 10:34:20.309213625 +0000 UTC m=+495.093640334" Oct 11 10:34:24.323111 master-2 kubenswrapper[4776]: I1011 10:34:24.323029 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:34:42.481193 master-2 kubenswrapper[4776]: E1011 10:34:42.480922 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" podUID="17bef070-1a9d-4090-b97a-7ce2c1c93b19" Oct 11 10:34:43.471706 master-2 kubenswrapper[4776]: I1011 10:34:43.471598 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:34:43.762505 master-2 kubenswrapper[4776]: I1011 10:34:43.762308 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") pod \"route-controller-manager-67d4d4d6d8-nn4kb\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:34:43.763030 master-2 kubenswrapper[4776]: E1011 10:34:43.762626 4776 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:34:43.763030 master-2 kubenswrapper[4776]: E1011 10:34:43.762781 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca podName:17bef070-1a9d-4090-b97a-7ce2c1c93b19 nodeName:}" failed. No retries permitted until 2025-10-11 10:36:45.762744266 +0000 UTC m=+640.547171015 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca") pod "route-controller-manager-67d4d4d6d8-nn4kb" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19") : configmap "client-ca" not found Oct 11 10:34:45.498317 master-2 kubenswrapper[4776]: E1011 10:34:45.498184 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" podUID="cacd2d60-e8a5-450f-a4ad-dfc0194e3325" Oct 11 10:34:46.496580 master-2 kubenswrapper[4776]: I1011 10:34:46.496492 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:34:46.711710 master-2 kubenswrapper[4776]: I1011 10:34:46.711629 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") pod \"controller-manager-546b64dc7b-pdhmc\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:34:46.713184 master-2 kubenswrapper[4776]: E1011 10:34:46.711823 4776 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 11 10:34:46.713184 master-2 kubenswrapper[4776]: E1011 10:34:46.712157 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca podName:cacd2d60-e8a5-450f-a4ad-dfc0194e3325 nodeName:}" failed. No retries permitted until 2025-10-11 10:36:48.712128363 +0000 UTC m=+643.496555102 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca") pod "controller-manager-546b64dc7b-pdhmc" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325") : configmap "client-ca" not found Oct 11 10:34:47.805321 master-2 kubenswrapper[4776]: I1011 10:34:47.805260 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log" Oct 11 10:34:51.554632 master-2 kubenswrapper[4776]: I1011 10:34:51.554562 4776 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-2"] Oct 11 10:34:51.555267 master-2 kubenswrapper[4776]: E1011 10:34:51.554851 4776 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/etcd-pod.yaml\": /etc/kubernetes/manifests/etcd-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Oct 11 10:34:51.555267 master-2 kubenswrapper[4776]: I1011 10:34:51.554907 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-2" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcdctl" containerID="cri-o://1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8" gracePeriod=30 Oct 11 10:34:51.555267 master-2 kubenswrapper[4776]: I1011 10:34:51.554955 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-2" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-readyz" containerID="cri-o://352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7" gracePeriod=30 Oct 11 10:34:51.555267 master-2 kubenswrapper[4776]: I1011 10:34:51.554975 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-2" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-rev" containerID="cri-o://2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d" gracePeriod=30 Oct 11 10:34:51.555267 master-2 kubenswrapper[4776]: I1011 10:34:51.554982 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-2" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-metrics" containerID="cri-o://e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6" gracePeriod=30 Oct 11 10:34:51.555267 master-2 kubenswrapper[4776]: I1011 10:34:51.555176 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-2" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd" containerID="cri-o://4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737" gracePeriod=30 Oct 11 10:34:51.565988 master-2 kubenswrapper[4776]: I1011 10:34:51.565630 4776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-2"] Oct 11 10:34:51.566177 master-2 kubenswrapper[4776]: E1011 10:34:51.566130 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-resources-copy" Oct 11 10:34:51.566177 master-2 kubenswrapper[4776]: I1011 10:34:51.566158 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-resources-copy" Oct 11 10:34:51.566177 master-2 kubenswrapper[4776]: E1011 10:34:51.566175 4776 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-rev" Oct 11 10:34:51.566281 master-2 kubenswrapper[4776]: I1011 10:34:51.566187 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-rev" Oct 11 10:34:51.566281 master-2 kubenswrapper[4776]: E1011 10:34:51.566207 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-ensure-env-vars" Oct 11 10:34:51.566281 master-2 kubenswrapper[4776]: I1011 10:34:51.566219 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-ensure-env-vars" Oct 11 10:34:51.566281 master-2 kubenswrapper[4776]: E1011 10:34:51.566236 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c492168afa20f49cb6e3534e1871011b" containerName="setup" Oct 11 10:34:51.566281 master-2 kubenswrapper[4776]: I1011 10:34:51.566245 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c492168afa20f49cb6e3534e1871011b" containerName="setup" Oct 11 10:34:51.566420 master-2 kubenswrapper[4776]: E1011 10:34:51.566259 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd" Oct 11 10:34:51.566420 master-2 kubenswrapper[4776]: I1011 10:34:51.566306 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd" Oct 11 10:34:51.566420 master-2 kubenswrapper[4776]: E1011 10:34:51.566327 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcdctl" Oct 11 10:34:51.566420 master-2 kubenswrapper[4776]: I1011 10:34:51.566339 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcdctl" Oct 11 10:34:51.566420 master-2 kubenswrapper[4776]: E1011 10:34:51.566352 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-readyz" Oct 11 10:34:51.566420 master-2 kubenswrapper[4776]: I1011 10:34:51.566363 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-readyz" Oct 11 10:34:51.566420 master-2 kubenswrapper[4776]: E1011 10:34:51.566378 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd" Oct 11 10:34:51.566420 master-2 kubenswrapper[4776]: I1011 10:34:51.566388 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd" Oct 11 10:34:51.566420 master-2 kubenswrapper[4776]: E1011 10:34:51.566404 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-metrics" Oct 11 10:34:51.566420 master-2 kubenswrapper[4776]: I1011 10:34:51.566415 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-metrics" Oct 11 10:34:51.566420 master-2 kubenswrapper[4776]: E1011 10:34:51.566428 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd" Oct 11 10:34:51.566801 master-2 kubenswrapper[4776]: I1011 10:34:51.566440 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd" Oct 11 10:34:51.566801 master-2 kubenswrapper[4776]: I1011 10:34:51.566627 4776 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd" Oct 11 10:34:51.566801 master-2 kubenswrapper[4776]: I1011 10:34:51.566644 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-readyz" Oct 11 10:34:51.566801 master-2 kubenswrapper[4776]: I1011 10:34:51.566656 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-rev" Oct 11 10:34:51.566801 master-2 kubenswrapper[4776]: I1011 10:34:51.566673 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd-metrics" Oct 11 10:34:51.566801 master-2 kubenswrapper[4776]: I1011 10:34:51.566690 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd" Oct 11 10:34:51.566801 master-2 kubenswrapper[4776]: I1011 10:34:51.566727 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcd" Oct 11 10:34:51.566801 master-2 kubenswrapper[4776]: I1011 10:34:51.566746 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c492168afa20f49cb6e3534e1871011b" containerName="etcdctl" Oct 11 10:34:51.596590 master-2 kubenswrapper[4776]: I1011 10:34:51.596531 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-log-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:34:51.596668 master-2 kubenswrapper[4776]: I1011 10:34:51.596608 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-cert-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:34:51.596668 master-2 kubenswrapper[4776]: I1011 10:34:51.596638 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-usr-local-bin\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:34:51.596776 master-2 kubenswrapper[4776]: I1011 10:34:51.596661 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-resource-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:34:51.596810 master-2 kubenswrapper[4776]: I1011 10:34:51.596772 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-static-pod-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:34:51.596844 master-2 kubenswrapper[4776]: I1011 10:34:51.596810 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-data-dir\") pod \"etcd-master-2\" (UID: 
\"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:34:51.698473 master-2 kubenswrapper[4776]: I1011 10:34:51.698385 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-data-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:34:51.698619 master-2 kubenswrapper[4776]: I1011 10:34:51.698562 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-data-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:34:51.698619 master-2 kubenswrapper[4776]: I1011 10:34:51.698604 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-log-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:34:51.698694 master-2 kubenswrapper[4776]: I1011 10:34:51.698665 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-log-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:34:51.698758 master-2 kubenswrapper[4776]: I1011 10:34:51.698738 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-cert-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:34:51.698799 master-2 kubenswrapper[4776]: I1011 10:34:51.698782 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-usr-local-bin\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:34:51.698849 master-2 kubenswrapper[4776]: I1011 10:34:51.698818 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-resource-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:34:51.698905 master-2 kubenswrapper[4776]: I1011 10:34:51.698873 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-usr-local-bin\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:34:51.698954 master-2 kubenswrapper[4776]: I1011 10:34:51.698910 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-cert-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:34:51.698987 master-2 kubenswrapper[4776]: I1011 10:34:51.698928 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-static-pod-dir\") pod \"etcd-master-2\" 
(UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:34:51.699019 master-2 kubenswrapper[4776]: I1011 10:34:51.698937 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-resource-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:34:51.699050 master-2 kubenswrapper[4776]: I1011 10:34:51.698888 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-static-pod-dir\") pod \"etcd-master-2\" (UID: \"2c4a583adfee975da84510940117e71a\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:34:52.547299 master-2 kubenswrapper[4776]: I1011 10:34:52.546053 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd/1.log" Oct 11 10:34:52.547299 master-2 kubenswrapper[4776]: I1011 10:34:52.546590 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd-rev/0.log" Oct 11 10:34:52.548884 master-2 kubenswrapper[4776]: I1011 10:34:52.548098 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd-metrics/0.log" Oct 11 10:34:52.550390 master-2 kubenswrapper[4776]: I1011 10:34:52.550331 4776 generic.go:334] "Generic (PLEG): container finished" podID="c492168afa20f49cb6e3534e1871011b" containerID="2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d" exitCode=2 Oct 11 10:34:52.550390 master-2 kubenswrapper[4776]: I1011 10:34:52.550363 4776 generic.go:334] "Generic (PLEG): container finished" podID="c492168afa20f49cb6e3534e1871011b" containerID="352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7" exitCode=0 Oct 11 10:34:52.550587 master-2 kubenswrapper[4776]: I1011 10:34:52.550453 4776 generic.go:334] "Generic (PLEG): container finished" podID="c492168afa20f49cb6e3534e1871011b" containerID="e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6" exitCode=2 Oct 11 10:34:52.555549 master-2 kubenswrapper[4776]: I1011 10:34:52.555458 4776 generic.go:334] "Generic (PLEG): container finished" podID="ebeec22d-9309-4efd-bbc0-f44c750a258c" containerID="25ad594b9284fd2089fd6abfaa970257ef0a465b5e9177e3d753cb32feaf3eb1" exitCode=0 Oct 11 10:34:52.556286 master-2 kubenswrapper[4776]: I1011 10:34:52.555544 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-5-master-2" event={"ID":"ebeec22d-9309-4efd-bbc0-f44c750a258c","Type":"ContainerDied","Data":"25ad594b9284fd2089fd6abfaa970257ef0a465b5e9177e3d753cb32feaf3eb1"} Oct 11 10:34:53.032000 master-2 kubenswrapper[4776]: I1011 10:34:53.031945 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:34:53.032401 master-2 kubenswrapper[4776]: I1011 10:34:53.032359 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 
10:34:53.951169 master-2 kubenswrapper[4776]: I1011 10:34:53.951084 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:54.034636 master-2 kubenswrapper[4776]: I1011 10:34:54.034534 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-kubelet-dir\") pod \"ebeec22d-9309-4efd-bbc0-f44c750a258c\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") " Oct 11 10:34:54.034636 master-2 kubenswrapper[4776]: I1011 10:34:54.034618 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ebeec22d-9309-4efd-bbc0-f44c750a258c" (UID: "ebeec22d-9309-4efd-bbc0-f44c750a258c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:34:54.035167 master-2 kubenswrapper[4776]: I1011 10:34:54.034836 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebeec22d-9309-4efd-bbc0-f44c750a258c-kube-api-access\") pod \"ebeec22d-9309-4efd-bbc0-f44c750a258c\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") " Oct 11 10:34:54.035167 master-2 kubenswrapper[4776]: I1011 10:34:54.034909 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-var-lock\") pod \"ebeec22d-9309-4efd-bbc0-f44c750a258c\" (UID: \"ebeec22d-9309-4efd-bbc0-f44c750a258c\") " Oct 11 10:34:54.035309 master-2 kubenswrapper[4776]: I1011 10:34:54.035167 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-var-lock" (OuterVolumeSpecName: "var-lock") pod "ebeec22d-9309-4efd-bbc0-f44c750a258c" (UID: "ebeec22d-9309-4efd-bbc0-f44c750a258c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:34:54.035611 master-2 kubenswrapper[4776]: I1011 10:34:54.035544 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-var-lock\") on node \"master-2\" DevicePath \"\"" Oct 11 10:34:54.035611 master-2 kubenswrapper[4776]: I1011 10:34:54.035595 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebeec22d-9309-4efd-bbc0-f44c750a258c-kubelet-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:34:54.037357 master-2 kubenswrapper[4776]: I1011 10:34:54.037315 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebeec22d-9309-4efd-bbc0-f44c750a258c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ebeec22d-9309-4efd-bbc0-f44c750a258c" (UID: "ebeec22d-9309-4efd-bbc0-f44c750a258c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:34:54.137503 master-2 kubenswrapper[4776]: I1011 10:34:54.137346 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebeec22d-9309-4efd-bbc0-f44c750a258c-kube-api-access\") on node \"master-2\" DevicePath \"\"" Oct 11 10:34:54.573913 master-2 kubenswrapper[4776]: I1011 10:34:54.573820 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-5-master-2" event={"ID":"ebeec22d-9309-4efd-bbc0-f44c750a258c","Type":"ContainerDied","Data":"6865f7f3eb804d27ba2426a02d0ac86e934a61a183cb3434647c8b9458b1b6af"} Oct 11 10:34:54.573913 master-2 kubenswrapper[4776]: I1011 10:34:54.573871 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6865f7f3eb804d27ba2426a02d0ac86e934a61a183cb3434647c8b9458b1b6af" Oct 11 10:34:54.573913 master-2 kubenswrapper[4776]: I1011 10:34:54.573888 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-5-master-2" Oct 11 10:34:58.031231 master-2 kubenswrapper[4776]: I1011 10:34:58.031143 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:34:58.031231 master-2 kubenswrapper[4776]: I1011 10:34:58.031232 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:35:03.031173 master-2 kubenswrapper[4776]: I1011 10:35:03.031123 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:35:03.031637 master-2 kubenswrapper[4776]: I1011 10:35:03.031185 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:35:03.031637 master-2 kubenswrapper[4776]: I1011 10:35:03.031256 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-guard-master-2" Oct 11 10:35:03.031845 master-2 kubenswrapper[4776]: I1011 10:35:03.031779 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:35:03.031994 master-2 kubenswrapper[4776]: I1011 10:35:03.031871 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:35:03.646513 master-2 kubenswrapper[4776]: I1011 10:35:03.646410 4776 generic.go:334] "Generic (PLEG): container finished" 
podID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerID="532aa417a81b450c3ee53605926d217d88d13c85c6a1d9e5ea21cc1e35ca9346" exitCode=0 Oct 11 10:35:03.646513 master-2 kubenswrapper[4776]: I1011 10:35:03.646463 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-57kcw" event={"ID":"c8cd90ff-e70c-4837-82c4-0fec67a8a51b","Type":"ContainerDied","Data":"532aa417a81b450c3ee53605926d217d88d13c85c6a1d9e5ea21cc1e35ca9346"} Oct 11 10:35:03.646513 master-2 kubenswrapper[4776]: I1011 10:35:03.646497 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-57kcw" event={"ID":"c8cd90ff-e70c-4837-82c4-0fec67a8a51b","Type":"ContainerStarted","Data":"37bb400b73bf025924c7c9c5bb9e1d981b6e77aaa21f8e234850cbe27200bcf9"} Oct 11 10:35:03.646513 master-2 kubenswrapper[4776]: I1011 10:35:03.646517 4776 scope.go:117] "RemoveContainer" containerID="d0cb5ca87b019c6dd7de016a32a463f61a07f9fbd819c59a6476d827605ded9c" Oct 11 10:35:03.967135 master-2 kubenswrapper[4776]: I1011 10:35:03.967018 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:35:03.971262 master-2 kubenswrapper[4776]: I1011 10:35:03.971211 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:03.971262 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:03.971262 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:03.971262 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:03.971418 master-2 kubenswrapper[4776]: I1011 10:35:03.971279 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:04.970772 master-2 kubenswrapper[4776]: I1011 10:35:04.970660 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:04.970772 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:04.970772 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:04.970772 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:04.970772 master-2 kubenswrapper[4776]: I1011 10:35:04.970739 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:05.665279 master-2 kubenswrapper[4776]: I1011 10:35:05.665171 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/2.log" Oct 11 10:35:05.665924 master-2 kubenswrapper[4776]: I1011 10:35:05.665888 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/1.log" Oct 11 10:35:05.666724 master-2 kubenswrapper[4776]: 
I1011 10:35:05.666368 4776 generic.go:334] "Generic (PLEG): container finished" podID="6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c" containerID="c8328096ff83e7ba5543b3018932260182e624ccc8f4652947efc0b12da9a115" exitCode=1 Oct 11 10:35:05.666724 master-2 kubenswrapper[4776]: I1011 10:35:05.666441 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" event={"ID":"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c","Type":"ContainerDied","Data":"c8328096ff83e7ba5543b3018932260182e624ccc8f4652947efc0b12da9a115"} Oct 11 10:35:05.666724 master-2 kubenswrapper[4776]: I1011 10:35:05.666506 4776 scope.go:117] "RemoveContainer" containerID="9d1e12fc2f0f72ed70b132b8b498f2531a6ae8ae01d1a1a52b35463b0839dedd" Oct 11 10:35:05.667493 master-2 kubenswrapper[4776]: I1011 10:35:05.667437 4776 scope.go:117] "RemoveContainer" containerID="c8328096ff83e7ba5543b3018932260182e624ccc8f4652947efc0b12da9a115" Oct 11 10:35:05.668051 master-2 kubenswrapper[4776]: E1011 10:35:05.667997 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-766ddf4575-wf7mj_openshift-ingress-operator(6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c)\"" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" podUID="6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c" Oct 11 10:35:05.970112 master-2 kubenswrapper[4776]: I1011 10:35:05.969962 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:05.970112 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:05.970112 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:05.970112 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:05.970112 master-2 kubenswrapper[4776]: I1011 10:35:05.970047 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:06.676108 master-2 kubenswrapper[4776]: I1011 10:35:06.676051 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/2.log" Oct 11 10:35:06.971024 master-2 kubenswrapper[4776]: I1011 10:35:06.970831 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:06.971024 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:06.971024 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:06.971024 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:06.971024 master-2 kubenswrapper[4776]: I1011 10:35:06.970938 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:07.967256 master-2 kubenswrapper[4776]: I1011 10:35:07.967189 4776 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:35:07.969954 master-2 kubenswrapper[4776]: I1011 10:35:07.969899 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:07.969954 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:07.969954 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:07.969954 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:07.970119 master-2 kubenswrapper[4776]: I1011 10:35:07.969983 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:08.031602 master-2 kubenswrapper[4776]: I1011 10:35:08.031501 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:35:08.031887 master-2 kubenswrapper[4776]: I1011 10:35:08.031647 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:35:08.970662 master-2 kubenswrapper[4776]: I1011 10:35:08.970576 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:08.970662 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:08.970662 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:08.970662 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:08.971177 master-2 kubenswrapper[4776]: I1011 10:35:08.970734 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:09.971078 master-2 kubenswrapper[4776]: I1011 10:35:09.971009 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:09.971078 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:09.971078 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:09.971078 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:09.971838 master-2 kubenswrapper[4776]: I1011 10:35:09.971079 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:10.970304 master-2 kubenswrapper[4776]: I1011 
10:35:10.970190 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:10.970304 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:10.970304 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:10.970304 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:10.970304 master-2 kubenswrapper[4776]: I1011 10:35:10.970277 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:11.970367 master-2 kubenswrapper[4776]: I1011 10:35:11.970282 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:11.970367 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:11.970367 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:11.970367 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:11.971149 master-2 kubenswrapper[4776]: I1011 10:35:11.970358 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:12.970479 master-2 kubenswrapper[4776]: I1011 10:35:12.970379 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:12.970479 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:12.970479 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:12.970479 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:12.970479 master-2 kubenswrapper[4776]: I1011 10:35:12.970465 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:13.032160 master-2 kubenswrapper[4776]: I1011 10:35:13.032071 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:35:13.032160 master-2 kubenswrapper[4776]: I1011 10:35:13.032143 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:35:13.969557 master-2 kubenswrapper[4776]: I1011 10:35:13.969477 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:13.969557 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:13.969557 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:13.969557 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:13.969557 master-2 kubenswrapper[4776]: I1011 10:35:13.969550 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:14.970978 master-2 kubenswrapper[4776]: I1011 10:35:14.970897 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:14.970978 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:14.970978 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:14.970978 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:14.971825 master-2 kubenswrapper[4776]: I1011 10:35:14.971024 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:15.969621 master-2 kubenswrapper[4776]: I1011 10:35:15.969493 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:15.969621 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:15.969621 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:15.969621 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:15.970124 master-2 kubenswrapper[4776]: I1011 10:35:15.969626 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:16.971585 master-2 kubenswrapper[4776]: I1011 10:35:16.971476 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:16.971585 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:16.971585 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:16.971585 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:16.972387 master-2 kubenswrapper[4776]: I1011 10:35:16.971602 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:17.800533 master-2 kubenswrapper[4776]: I1011 10:35:17.800435 4776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log" Oct 11 10:35:17.969712 master-2 kubenswrapper[4776]: I1011 10:35:17.969624 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:17.969712 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:17.969712 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:17.969712 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:17.970078 master-2 kubenswrapper[4776]: I1011 10:35:17.969812 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:18.031593 master-2 kubenswrapper[4776]: I1011 10:35:18.031510 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:35:18.032518 master-2 kubenswrapper[4776]: I1011 10:35:18.031597 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:35:18.059525 master-2 kubenswrapper[4776]: I1011 10:35:18.059346 4776 scope.go:117] "RemoveContainer" containerID="c8328096ff83e7ba5543b3018932260182e624ccc8f4652947efc0b12da9a115" Oct 11 10:35:18.059914 master-2 kubenswrapper[4776]: E1011 10:35:18.059859 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-766ddf4575-wf7mj_openshift-ingress-operator(6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c)\"" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" podUID="6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c" Oct 11 10:35:18.969353 master-2 kubenswrapper[4776]: I1011 10:35:18.969283 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:18.969353 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:18.969353 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:18.969353 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:18.969604 master-2 kubenswrapper[4776]: I1011 10:35:18.969381 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:19.969833 master-2 kubenswrapper[4776]: I1011 10:35:19.969745 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:19.969833 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:19.969833 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:19.969833 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:19.970722 master-2 kubenswrapper[4776]: I1011 10:35:19.969849 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:20.970490 master-2 kubenswrapper[4776]: I1011 10:35:20.970427 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:20.970490 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:20.970490 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:20.970490 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:20.970490 master-2 kubenswrapper[4776]: I1011 10:35:20.970491 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:21.780333 master-2 kubenswrapper[4776]: I1011 10:35:21.780260 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd/2.log" Oct 11 10:35:21.780967 master-2 kubenswrapper[4776]: I1011 10:35:21.780928 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd/1.log" Oct 11 10:35:21.781411 master-2 kubenswrapper[4776]: I1011 10:35:21.781371 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd-rev/0.log" Oct 11 10:35:21.783045 master-2 kubenswrapper[4776]: I1011 10:35:21.782984 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd-metrics/0.log" Oct 11 10:35:21.783636 master-2 kubenswrapper[4776]: I1011 10:35:21.783580 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcdctl/0.log" Oct 11 10:35:21.784378 master-2 kubenswrapper[4776]: I1011 10:35:21.784331 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd/2.log" Oct 11 10:35:21.784890 master-2 kubenswrapper[4776]: I1011 10:35:21.784837 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd/1.log" Oct 11 10:35:21.785331 master-2 kubenswrapper[4776]: I1011 10:35:21.785272 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-2" Oct 11 10:35:21.785636 master-2 kubenswrapper[4776]: I1011 10:35:21.785569 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd-rev/0.log" Oct 11 10:35:21.787100 master-2 kubenswrapper[4776]: I1011 10:35:21.787055 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcd-metrics/0.log" Oct 11 10:35:21.787704 master-2 kubenswrapper[4776]: I1011 10:35:21.787654 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_c492168afa20f49cb6e3534e1871011b/etcdctl/0.log" Oct 11 10:35:21.789307 master-2 kubenswrapper[4776]: I1011 10:35:21.789262 4776 generic.go:334] "Generic (PLEG): container finished" podID="c492168afa20f49cb6e3534e1871011b" containerID="4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737" exitCode=137 Oct 11 10:35:21.789373 master-2 kubenswrapper[4776]: I1011 10:35:21.789304 4776 generic.go:334] "Generic (PLEG): container finished" podID="c492168afa20f49cb6e3534e1871011b" containerID="1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8" exitCode=137 Oct 11 10:35:21.789373 master-2 kubenswrapper[4776]: I1011 10:35:21.789364 4776 scope.go:117] "RemoveContainer" containerID="4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737" Oct 11 10:35:21.792655 master-2 kubenswrapper[4776]: I1011 10:35:21.792519 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-2" oldPodUID="c492168afa20f49cb6e3534e1871011b" podUID="2c4a583adfee975da84510940117e71a" Oct 11 10:35:21.808785 master-2 kubenswrapper[4776]: I1011 10:35:21.808750 4776 scope.go:117] "RemoveContainer" containerID="983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc" Oct 11 10:35:21.836955 master-2 kubenswrapper[4776]: I1011 10:35:21.836904 4776 scope.go:117] "RemoveContainer" containerID="2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d" Oct 11 10:35:21.852400 master-2 kubenswrapper[4776]: I1011 10:35:21.852354 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-cert-dir\") pod \"c492168afa20f49cb6e3534e1871011b\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " Oct 11 10:35:21.852400 master-2 kubenswrapper[4776]: I1011 10:35:21.852400 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-usr-local-bin\") pod \"c492168afa20f49cb6e3534e1871011b\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " Oct 11 10:35:21.852633 master-2 kubenswrapper[4776]: I1011 10:35:21.852445 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-data-dir\") pod \"c492168afa20f49cb6e3534e1871011b\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " Oct 11 10:35:21.852633 master-2 kubenswrapper[4776]: I1011 10:35:21.852487 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-static-pod-dir\") pod \"c492168afa20f49cb6e3534e1871011b\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " Oct 11 10:35:21.852745 master-2 kubenswrapper[4776]: I1011 
10:35:21.852657 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-log-dir\") pod \"c492168afa20f49cb6e3534e1871011b\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " Oct 11 10:35:21.852745 master-2 kubenswrapper[4776]: I1011 10:35:21.852697 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-resource-dir\") pod \"c492168afa20f49cb6e3534e1871011b\" (UID: \"c492168afa20f49cb6e3534e1871011b\") " Oct 11 10:35:21.852834 master-2 kubenswrapper[4776]: I1011 10:35:21.852791 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "c492168afa20f49cb6e3534e1871011b" (UID: "c492168afa20f49cb6e3534e1871011b"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:35:21.852834 master-2 kubenswrapper[4776]: I1011 10:35:21.852809 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-data-dir" (OuterVolumeSpecName: "data-dir") pod "c492168afa20f49cb6e3534e1871011b" (UID: "c492168afa20f49cb6e3534e1871011b"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:35:21.852979 master-2 kubenswrapper[4776]: I1011 10:35:21.852831 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "c492168afa20f49cb6e3534e1871011b" (UID: "c492168afa20f49cb6e3534e1871011b"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:35:21.852979 master-2 kubenswrapper[4776]: I1011 10:35:21.852847 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "c492168afa20f49cb6e3534e1871011b" (UID: "c492168afa20f49cb6e3534e1871011b"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:35:21.852979 master-2 kubenswrapper[4776]: I1011 10:35:21.852852 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-log-dir" (OuterVolumeSpecName: "log-dir") pod "c492168afa20f49cb6e3534e1871011b" (UID: "c492168afa20f49cb6e3534e1871011b"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:35:21.852979 master-2 kubenswrapper[4776]: I1011 10:35:21.852903 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "c492168afa20f49cb6e3534e1871011b" (UID: "c492168afa20f49cb6e3534e1871011b"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:35:21.853130 master-2 kubenswrapper[4776]: I1011 10:35:21.852998 4776 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-cert-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:21.853130 master-2 kubenswrapper[4776]: I1011 10:35:21.853013 4776 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-usr-local-bin\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:21.853130 master-2 kubenswrapper[4776]: I1011 10:35:21.853021 4776 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-data-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:21.853130 master-2 kubenswrapper[4776]: I1011 10:35:21.853030 4776 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-static-pod-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:21.853130 master-2 kubenswrapper[4776]: I1011 10:35:21.853038 4776 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-log-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:21.853130 master-2 kubenswrapper[4776]: I1011 10:35:21.853045 4776 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c492168afa20f49cb6e3534e1871011b-resource-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:21.856304 master-2 kubenswrapper[4776]: I1011 10:35:21.856265 4776 scope.go:117] "RemoveContainer" containerID="352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7" Oct 11 10:35:21.874630 master-2 kubenswrapper[4776]: I1011 10:35:21.874585 4776 scope.go:117] "RemoveContainer" containerID="e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6" Oct 11 10:35:21.907389 master-2 kubenswrapper[4776]: I1011 10:35:21.907333 4776 scope.go:117] "RemoveContainer" containerID="1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8" Oct 11 10:35:21.923989 master-2 kubenswrapper[4776]: I1011 10:35:21.920628 4776 scope.go:117] "RemoveContainer" containerID="8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607" Oct 11 10:35:21.941609 master-2 kubenswrapper[4776]: I1011 10:35:21.941582 4776 scope.go:117] "RemoveContainer" containerID="fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9" Oct 11 10:35:21.961913 master-2 kubenswrapper[4776]: I1011 10:35:21.961871 4776 scope.go:117] "RemoveContainer" containerID="c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738" Oct 11 10:35:21.970347 master-2 kubenswrapper[4776]: I1011 10:35:21.970297 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:21.970347 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:21.970347 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:21.970347 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:21.971024 master-2 kubenswrapper[4776]: I1011 10:35:21.970362 4776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:21.981838 master-2 kubenswrapper[4776]: I1011 10:35:21.981803 4776 scope.go:117] "RemoveContainer" containerID="4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737" Oct 11 10:35:21.982285 master-2 kubenswrapper[4776]: E1011 10:35:21.982242 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737\": container with ID starting with 4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737 not found: ID does not exist" containerID="4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737" Oct 11 10:35:21.982285 master-2 kubenswrapper[4776]: I1011 10:35:21.982279 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737"} err="failed to get container status \"4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737\": rpc error: code = NotFound desc = could not find container \"4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737\": container with ID starting with 4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737 not found: ID does not exist" Oct 11 10:35:21.982417 master-2 kubenswrapper[4776]: I1011 10:35:21.982296 4776 scope.go:117] "RemoveContainer" containerID="983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc" Oct 11 10:35:21.982726 master-2 kubenswrapper[4776]: E1011 10:35:21.982682 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc\": container with ID starting with 983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc not found: ID does not exist" containerID="983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc" Oct 11 10:35:21.982793 master-2 kubenswrapper[4776]: I1011 10:35:21.982728 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc"} err="failed to get container status \"983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc\": rpc error: code = NotFound desc = could not find container \"983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc\": container with ID starting with 983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc not found: ID does not exist" Oct 11 10:35:21.982793 master-2 kubenswrapper[4776]: I1011 10:35:21.982757 4776 scope.go:117] "RemoveContainer" containerID="2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d" Oct 11 10:35:21.983164 master-2 kubenswrapper[4776]: E1011 10:35:21.983132 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d\": container with ID starting with 2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d not found: ID does not exist" containerID="2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d" Oct 11 10:35:21.983164 master-2 kubenswrapper[4776]: I1011 10:35:21.983156 4776 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d"} err="failed to get container status \"2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d\": rpc error: code = NotFound desc = could not find container \"2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d\": container with ID starting with 2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d not found: ID does not exist" Oct 11 10:35:21.983258 master-2 kubenswrapper[4776]: I1011 10:35:21.983172 4776 scope.go:117] "RemoveContainer" containerID="352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7" Oct 11 10:35:21.983533 master-2 kubenswrapper[4776]: E1011 10:35:21.983506 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7\": container with ID starting with 352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7 not found: ID does not exist" containerID="352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7" Oct 11 10:35:21.983533 master-2 kubenswrapper[4776]: I1011 10:35:21.983523 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7"} err="failed to get container status \"352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7\": rpc error: code = NotFound desc = could not find container \"352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7\": container with ID starting with 352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7 not found: ID does not exist" Oct 11 10:35:21.983533 master-2 kubenswrapper[4776]: I1011 10:35:21.983534 4776 scope.go:117] "RemoveContainer" containerID="e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6" Oct 11 10:35:21.983808 master-2 kubenswrapper[4776]: E1011 10:35:21.983776 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6\": container with ID starting with e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6 not found: ID does not exist" containerID="e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6" Oct 11 10:35:21.983872 master-2 kubenswrapper[4776]: I1011 10:35:21.983805 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6"} err="failed to get container status \"e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6\": rpc error: code = NotFound desc = could not find container \"e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6\": container with ID starting with e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6 not found: ID does not exist" Oct 11 10:35:21.983872 master-2 kubenswrapper[4776]: I1011 10:35:21.983825 4776 scope.go:117] "RemoveContainer" containerID="1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8" Oct 11 10:35:21.984096 master-2 kubenswrapper[4776]: E1011 10:35:21.984067 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8\": container with ID starting with 
1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8 not found: ID does not exist" containerID="1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8" Oct 11 10:35:21.984096 master-2 kubenswrapper[4776]: I1011 10:35:21.984088 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8"} err="failed to get container status \"1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8\": rpc error: code = NotFound desc = could not find container \"1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8\": container with ID starting with 1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8 not found: ID does not exist" Oct 11 10:35:21.984185 master-2 kubenswrapper[4776]: I1011 10:35:21.984101 4776 scope.go:117] "RemoveContainer" containerID="8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607" Oct 11 10:35:21.984406 master-2 kubenswrapper[4776]: E1011 10:35:21.984375 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607\": container with ID starting with 8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607 not found: ID does not exist" containerID="8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607" Oct 11 10:35:21.984464 master-2 kubenswrapper[4776]: I1011 10:35:21.984401 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607"} err="failed to get container status \"8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607\": rpc error: code = NotFound desc = could not find container \"8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607\": container with ID starting with 8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607 not found: ID does not exist" Oct 11 10:35:21.984464 master-2 kubenswrapper[4776]: I1011 10:35:21.984416 4776 scope.go:117] "RemoveContainer" containerID="fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9" Oct 11 10:35:21.984725 master-2 kubenswrapper[4776]: E1011 10:35:21.984699 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9\": container with ID starting with fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9 not found: ID does not exist" containerID="fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9" Oct 11 10:35:21.984785 master-2 kubenswrapper[4776]: I1011 10:35:21.984725 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9"} err="failed to get container status \"fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9\": rpc error: code = NotFound desc = could not find container \"fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9\": container with ID starting with fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9 not found: ID does not exist" Oct 11 10:35:21.984785 master-2 kubenswrapper[4776]: I1011 10:35:21.984741 4776 scope.go:117] "RemoveContainer" containerID="c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738" Oct 11 10:35:21.985018 master-2 
kubenswrapper[4776]: E1011 10:35:21.984987 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738\": container with ID starting with c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738 not found: ID does not exist" containerID="c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738" Oct 11 10:35:21.985018 master-2 kubenswrapper[4776]: I1011 10:35:21.985008 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738"} err="failed to get container status \"c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738\": rpc error: code = NotFound desc = could not find container \"c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738\": container with ID starting with c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738 not found: ID does not exist" Oct 11 10:35:21.985018 master-2 kubenswrapper[4776]: I1011 10:35:21.985019 4776 scope.go:117] "RemoveContainer" containerID="4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737" Oct 11 10:35:21.985264 master-2 kubenswrapper[4776]: I1011 10:35:21.985242 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737"} err="failed to get container status \"4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737\": rpc error: code = NotFound desc = could not find container \"4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737\": container with ID starting with 4fd16eb13bcc24817fa1e10622c487d4af2244d176d92e75ca4478d4012c9737 not found: ID does not exist" Oct 11 10:35:21.985407 master-2 kubenswrapper[4776]: I1011 10:35:21.985262 4776 scope.go:117] "RemoveContainer" containerID="983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc" Oct 11 10:35:21.985524 master-2 kubenswrapper[4776]: I1011 10:35:21.985497 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc"} err="failed to get container status \"983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc\": rpc error: code = NotFound desc = could not find container \"983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc\": container with ID starting with 983414f0a1374646b184f8ce997a68329b0e1c608fd1e9f90dd70e5a1a1493fc not found: ID does not exist" Oct 11 10:35:21.985524 master-2 kubenswrapper[4776]: I1011 10:35:21.985520 4776 scope.go:117] "RemoveContainer" containerID="2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d" Oct 11 10:35:21.985780 master-2 kubenswrapper[4776]: I1011 10:35:21.985754 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d"} err="failed to get container status \"2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d\": rpc error: code = NotFound desc = could not find container \"2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d\": container with ID starting with 2ace7c1676116a561e461611b0ee18a1b9b34a8c0667ef13446af84903b93a5d not found: ID does not exist" Oct 11 10:35:21.985780 master-2 kubenswrapper[4776]: I1011 10:35:21.985777 4776 scope.go:117] "RemoveContainer" 
containerID="352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7" Oct 11 10:35:21.986027 master-2 kubenswrapper[4776]: I1011 10:35:21.986004 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7"} err="failed to get container status \"352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7\": rpc error: code = NotFound desc = could not find container \"352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7\": container with ID starting with 352daa7b93ac85d51172d8082661c6993cf761d71b8b6ae418e9a08be7a529b7 not found: ID does not exist" Oct 11 10:35:21.986083 master-2 kubenswrapper[4776]: I1011 10:35:21.986026 4776 scope.go:117] "RemoveContainer" containerID="e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6" Oct 11 10:35:21.986251 master-2 kubenswrapper[4776]: I1011 10:35:21.986228 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6"} err="failed to get container status \"e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6\": rpc error: code = NotFound desc = could not find container \"e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6\": container with ID starting with e8648786512f182b76d620be61834d6bf127edf01762b85991012f77492dcaa6 not found: ID does not exist" Oct 11 10:35:21.986251 master-2 kubenswrapper[4776]: I1011 10:35:21.986247 4776 scope.go:117] "RemoveContainer" containerID="1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8" Oct 11 10:35:21.986464 master-2 kubenswrapper[4776]: I1011 10:35:21.986443 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8"} err="failed to get container status \"1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8\": rpc error: code = NotFound desc = could not find container \"1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8\": container with ID starting with 1a15c441f1baa1952fb10b8954ae003c1d84d3b70225126874f86951269561d8 not found: ID does not exist" Oct 11 10:35:21.986464 master-2 kubenswrapper[4776]: I1011 10:35:21.986461 4776 scope.go:117] "RemoveContainer" containerID="8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607" Oct 11 10:35:21.986717 master-2 kubenswrapper[4776]: I1011 10:35:21.986650 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607"} err="failed to get container status \"8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607\": rpc error: code = NotFound desc = could not find container \"8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607\": container with ID starting with 8fe4aaa6a3993a289cd3c8502c5d6cf56c8e66503f6b0df9bc12870a2bd08607 not found: ID does not exist" Oct 11 10:35:21.986717 master-2 kubenswrapper[4776]: I1011 10:35:21.986692 4776 scope.go:117] "RemoveContainer" containerID="fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9" Oct 11 10:35:21.987090 master-2 kubenswrapper[4776]: I1011 10:35:21.987055 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9"} err="failed to get container status 
\"fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9\": rpc error: code = NotFound desc = could not find container \"fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9\": container with ID starting with fe372b7f618f48a622d34790265ba3069867af7226c218e9eac4e0da880f78f9 not found: ID does not exist" Oct 11 10:35:21.987090 master-2 kubenswrapper[4776]: I1011 10:35:21.987075 4776 scope.go:117] "RemoveContainer" containerID="c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738" Oct 11 10:35:21.987345 master-2 kubenswrapper[4776]: I1011 10:35:21.987299 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738"} err="failed to get container status \"c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738\": rpc error: code = NotFound desc = could not find container \"c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738\": container with ID starting with c8d9871d3e2f802f116df30f07a79e10c6128d9db239a2f755a77886407e1738 not found: ID does not exist" Oct 11 10:35:22.065317 master-2 kubenswrapper[4776]: I1011 10:35:22.065244 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c492168afa20f49cb6e3534e1871011b" path="/var/lib/kubelet/pods/c492168afa20f49cb6e3534e1871011b/volumes" Oct 11 10:35:22.795909 master-2 kubenswrapper[4776]: I1011 10:35:22.795863 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-2" Oct 11 10:35:22.802416 master-2 kubenswrapper[4776]: I1011 10:35:22.802333 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-2" oldPodUID="c492168afa20f49cb6e3534e1871011b" podUID="2c4a583adfee975da84510940117e71a" Oct 11 10:35:22.969268 master-2 kubenswrapper[4776]: I1011 10:35:22.969201 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:22.969268 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:22.969268 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:22.969268 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:22.969564 master-2 kubenswrapper[4776]: I1011 10:35:22.969282 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:23.032153 master-2 kubenswrapper[4776]: I1011 10:35:23.032057 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:35:23.032153 master-2 kubenswrapper[4776]: I1011 10:35:23.032126 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:35:23.970109 master-2 kubenswrapper[4776]: I1011 10:35:23.970021 4776 patch_prober.go:28] 
interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:23.970109 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:23.970109 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:23.970109 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:23.970599 master-2 kubenswrapper[4776]: I1011 10:35:23.970121 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:24.970461 master-2 kubenswrapper[4776]: I1011 10:35:24.970378 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:24.970461 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:24.970461 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:24.970461 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:24.971302 master-2 kubenswrapper[4776]: I1011 10:35:24.970483 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:25.969384 master-2 kubenswrapper[4776]: I1011 10:35:25.969319 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:25.969384 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:25.969384 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:25.969384 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:25.969384 master-2 kubenswrapper[4776]: I1011 10:35:25.969372 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:26.969104 master-2 kubenswrapper[4776]: I1011 10:35:26.969055 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:26.969104 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:26.969104 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:26.969104 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:26.969888 master-2 kubenswrapper[4776]: I1011 10:35:26.969113 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:27.970532 master-2 kubenswrapper[4776]: I1011 10:35:27.970426 4776 patch_prober.go:28] 
interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:27.970532 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:27.970532 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:27.970532 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:27.970532 master-2 kubenswrapper[4776]: I1011 10:35:27.970518 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:28.031621 master-2 kubenswrapper[4776]: I1011 10:35:28.031521 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:35:28.031621 master-2 kubenswrapper[4776]: I1011 10:35:28.031619 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:35:28.047724 master-2 kubenswrapper[4776]: I1011 10:35:28.047577 4776 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 11 10:35:28.969264 master-2 kubenswrapper[4776]: I1011 10:35:28.969185 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:28.969264 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:28.969264 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:28.969264 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:28.969628 master-2 kubenswrapper[4776]: I1011 10:35:28.969294 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:29.058529 master-2 kubenswrapper[4776]: I1011 10:35:29.058457 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-2" Oct 11 10:35:29.078197 master-2 kubenswrapper[4776]: I1011 10:35:29.078163 4776 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-2" podUID="bcf1681b-75de-4981-b8a1-447e616b2f7b" Oct 11 10:35:29.078346 master-2 kubenswrapper[4776]: I1011 10:35:29.078329 4776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-2" podUID="bcf1681b-75de-4981-b8a1-447e616b2f7b" Oct 11 10:35:29.099856 master-2 kubenswrapper[4776]: I1011 10:35:29.099800 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-2"] Oct 11 10:35:29.102429 master-2 kubenswrapper[4776]: I1011 10:35:29.102358 4776 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-2" Oct 11 10:35:29.111331 master-2 kubenswrapper[4776]: I1011 10:35:29.111245 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-2"] Oct 11 10:35:29.126123 master-2 kubenswrapper[4776]: I1011 10:35:29.126078 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-2" Oct 11 10:35:29.129821 master-2 kubenswrapper[4776]: I1011 10:35:29.129789 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-2"] Oct 11 10:35:29.146343 master-2 kubenswrapper[4776]: W1011 10:35:29.146297 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c4a583adfee975da84510940117e71a.slice/crio-1e7cfee8509ab14d62b205fdcccd1d761feda7516d2d81cde051d988a20a6538 WatchSource:0}: Error finding container 1e7cfee8509ab14d62b205fdcccd1d761feda7516d2d81cde051d988a20a6538: Status 404 returned error can't find the container with id 1e7cfee8509ab14d62b205fdcccd1d761feda7516d2d81cde051d988a20a6538 Oct 11 10:35:29.838997 master-2 kubenswrapper[4776]: I1011 10:35:29.838915 4776 generic.go:334] "Generic (PLEG): container finished" podID="2c4a583adfee975da84510940117e71a" containerID="8498ac9ed169687a9469df6d265ee2510783d932551f6caa45673a37deb3682e" exitCode=0 Oct 11 10:35:29.838997 master-2 kubenswrapper[4776]: I1011 10:35:29.838971 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"2c4a583adfee975da84510940117e71a","Type":"ContainerDied","Data":"8498ac9ed169687a9469df6d265ee2510783d932551f6caa45673a37deb3682e"} Oct 11 10:35:29.838997 master-2 kubenswrapper[4776]: I1011 10:35:29.839003 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"2c4a583adfee975da84510940117e71a","Type":"ContainerStarted","Data":"1e7cfee8509ab14d62b205fdcccd1d761feda7516d2d81cde051d988a20a6538"} Oct 11 10:35:29.972053 master-2 kubenswrapper[4776]: I1011 10:35:29.972004 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:29.972053 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:29.972053 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:29.972053 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:29.972317 master-2 kubenswrapper[4776]: I1011 10:35:29.972064 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:30.846919 master-2 kubenswrapper[4776]: I1011 10:35:30.846879 4776 generic.go:334] "Generic (PLEG): container finished" podID="2c4a583adfee975da84510940117e71a" containerID="8727285f17e12497f3cb86862360f5e6e70608ca5f775837d9eae36b1c220a0e" exitCode=0 Oct 11 10:35:30.847425 master-2 kubenswrapper[4776]: I1011 10:35:30.846920 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"2c4a583adfee975da84510940117e71a","Type":"ContainerDied","Data":"8727285f17e12497f3cb86862360f5e6e70608ca5f775837d9eae36b1c220a0e"} Oct 11 10:35:30.969351 master-2 kubenswrapper[4776]: I1011 10:35:30.969270 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:30.969351 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:30.969351 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:30.969351 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:30.969351 master-2 kubenswrapper[4776]: I1011 10:35:30.969344 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:31.073987 master-2 kubenswrapper[4776]: E1011 10:35:31.073950 4776 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-ap7ej74ueigk4: secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:35:31.074114 master-2 kubenswrapper[4776]: E1011 10:35:31.074002 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle podName:5473628e-94c8-4706-bb03-ff4836debe5f nodeName:}" failed. No retries permitted until 2025-10-11 10:35:31.573986525 +0000 UTC m=+566.358413234 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle") pod "metrics-server-65d86dff78-crzgp" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f") : secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:35:31.358417 master-2 kubenswrapper[4776]: I1011 10:35:31.358378 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-v6dfc_8757af56-20fb-439e-adba-7e4e50378936/assisted-installer-controller/0.log" Oct 11 10:35:31.580298 master-2 kubenswrapper[4776]: E1011 10:35:31.580227 4776 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-ap7ej74ueigk4: secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:35:31.580515 master-2 kubenswrapper[4776]: E1011 10:35:31.580315 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle podName:5473628e-94c8-4706-bb03-ff4836debe5f nodeName:}" failed. No retries permitted until 2025-10-11 10:35:32.580296854 +0000 UTC m=+567.364723563 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle") pod "metrics-server-65d86dff78-crzgp" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f") : secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:35:31.855419 master-2 kubenswrapper[4776]: I1011 10:35:31.855356 4776 generic.go:334] "Generic (PLEG): container finished" podID="2c4a583adfee975da84510940117e71a" containerID="7f66f4dfc685ae37f005fda864fb1584f27f6f6ea0f20644d46be5a7beee01cb" exitCode=0 Oct 11 10:35:31.855419 master-2 kubenswrapper[4776]: I1011 10:35:31.855398 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"2c4a583adfee975da84510940117e71a","Type":"ContainerDied","Data":"7f66f4dfc685ae37f005fda864fb1584f27f6f6ea0f20644d46be5a7beee01cb"} Oct 11 10:35:31.968589 master-2 kubenswrapper[4776]: I1011 10:35:31.968495 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:31.968589 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:31.968589 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:31.968589 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:31.968589 master-2 kubenswrapper[4776]: I1011 10:35:31.968585 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:32.593201 master-2 kubenswrapper[4776]: E1011 10:35:32.593162 4776 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-ap7ej74ueigk4: secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:35:32.593315 master-2 kubenswrapper[4776]: E1011 10:35:32.593231 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle podName:5473628e-94c8-4706-bb03-ff4836debe5f nodeName:}" failed. No retries permitted until 2025-10-11 10:35:34.593217189 +0000 UTC m=+569.377643898 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle") pod "metrics-server-65d86dff78-crzgp" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f") : secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:35:32.867664 master-2 kubenswrapper[4776]: I1011 10:35:32.867568 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"2c4a583adfee975da84510940117e71a","Type":"ContainerStarted","Data":"8aca7dd04fbd9bc97f886a62f0850ed592b9776f6bcf8d57f228ba1b4d57e0dd"} Oct 11 10:35:32.867664 master-2 kubenswrapper[4776]: I1011 10:35:32.867655 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"2c4a583adfee975da84510940117e71a","Type":"ContainerStarted","Data":"431d1c1363285965b411f06e0338b448e40c4fef537351ea45fb00ac08129886"} Oct 11 10:35:32.868247 master-2 kubenswrapper[4776]: I1011 10:35:32.867715 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"2c4a583adfee975da84510940117e71a","Type":"ContainerStarted","Data":"56dc1b99eea54bd4bc4092f0e7a9e5c850ceefafdfda928c057fe6d1b40b5d1d"} Oct 11 10:35:32.868247 master-2 kubenswrapper[4776]: I1011 10:35:32.867739 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"2c4a583adfee975da84510940117e71a","Type":"ContainerStarted","Data":"8ca4916746dcde3d1a7ba8c08259545f440c11f186b53d82aba07a17030c92d1"} Oct 11 10:35:32.868247 master-2 kubenswrapper[4776]: I1011 10:35:32.867760 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"2c4a583adfee975da84510940117e71a","Type":"ContainerStarted","Data":"cc8943c5b4823b597a38ede8102a3e667afad877c11be87f804a4d9fcdbf5687"} Oct 11 10:35:32.919187 master-2 kubenswrapper[4776]: I1011 10:35:32.919100 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-2" podStartSLOduration=3.919043093 podStartE2EDuration="3.919043093s" podCreationTimestamp="2025-10-11 10:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:35:32.915723175 +0000 UTC m=+567.700149914" watchObservedRunningTime="2025-10-11 10:35:32.919043093 +0000 UTC m=+567.703469802" Oct 11 10:35:32.969127 master-2 kubenswrapper[4776]: I1011 10:35:32.969005 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:32.969127 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:32.969127 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:32.969127 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:32.969472 master-2 kubenswrapper[4776]: I1011 10:35:32.969169 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:33.058161 master-2 kubenswrapper[4776]: I1011 10:35:33.058027 4776 scope.go:117] "RemoveContainer" containerID="c8328096ff83e7ba5543b3018932260182e624ccc8f4652947efc0b12da9a115" Oct 11 10:35:33.877607 master-2 kubenswrapper[4776]: 
I1011 10:35:33.877544 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/2.log" Oct 11 10:35:33.878583 master-2 kubenswrapper[4776]: I1011 10:35:33.878539 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj" event={"ID":"6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c","Type":"ContainerStarted","Data":"8086a83d6fa23171a3f4677b881eaab20b411c82d7709f0eaf8a476e4028ed0e"} Oct 11 10:35:33.969920 master-2 kubenswrapper[4776]: I1011 10:35:33.969665 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:33.969920 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:33.969920 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:33.969920 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:33.969920 master-2 kubenswrapper[4776]: I1011 10:35:33.969764 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:34.132772 master-2 kubenswrapper[4776]: I1011 10:35:34.128083 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-2" Oct 11 10:35:34.640434 master-2 kubenswrapper[4776]: E1011 10:35:34.640318 4776 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-ap7ej74ueigk4: secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:35:34.640434 master-2 kubenswrapper[4776]: E1011 10:35:34.640448 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle podName:5473628e-94c8-4706-bb03-ff4836debe5f nodeName:}" failed. No retries permitted until 2025-10-11 10:35:38.640423065 +0000 UTC m=+573.424849984 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle") pod "metrics-server-65d86dff78-crzgp" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f") : secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:35:34.969906 master-2 kubenswrapper[4776]: I1011 10:35:34.969762 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:34.969906 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:34.969906 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:34.969906 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:34.971237 master-2 kubenswrapper[4776]: I1011 10:35:34.969947 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:35.970262 master-2 kubenswrapper[4776]: I1011 10:35:35.970171 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:35.970262 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:35.970262 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:35.970262 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:35.971276 master-2 kubenswrapper[4776]: I1011 10:35:35.970293 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:36.969999 master-2 kubenswrapper[4776]: I1011 10:35:36.969882 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:36.969999 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:36.969999 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:36.969999 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:36.969999 master-2 kubenswrapper[4776]: I1011 10:35:36.969965 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:37.969594 master-2 kubenswrapper[4776]: I1011 10:35:37.969531 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:37.969594 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:37.969594 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:37.969594 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:37.969594 master-2 
kubenswrapper[4776]: I1011 10:35:37.969587 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:38.032037 master-2 kubenswrapper[4776]: I1011 10:35:38.031927 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:35:38.032860 master-2 kubenswrapper[4776]: I1011 10:35:38.032031 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:35:38.697810 master-2 kubenswrapper[4776]: E1011 10:35:38.697735 4776 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-ap7ej74ueigk4: secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:35:38.698203 master-2 kubenswrapper[4776]: E1011 10:35:38.697868 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle podName:5473628e-94c8-4706-bb03-ff4836debe5f nodeName:}" failed. No retries permitted until 2025-10-11 10:35:46.697834088 +0000 UTC m=+581.482260847 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle") pod "metrics-server-65d86dff78-crzgp" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f") : secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:35:38.970357 master-2 kubenswrapper[4776]: I1011 10:35:38.970150 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:38.970357 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:38.970357 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:38.970357 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:38.970357 master-2 kubenswrapper[4776]: I1011 10:35:38.970303 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:39.128277 master-2 kubenswrapper[4776]: I1011 10:35:39.128188 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-2" Oct 11 10:35:39.546447 master-2 kubenswrapper[4776]: I1011 10:35:39.546365 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-546b64dc7b-pdhmc"] Oct 11 10:35:39.546971 master-2 kubenswrapper[4776]: E1011 10:35:39.546938 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" 
podUID="cacd2d60-e8a5-450f-a4ad-dfc0194e3325" Oct 11 10:35:39.585750 master-2 kubenswrapper[4776]: I1011 10:35:39.585645 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb"] Oct 11 10:35:39.586283 master-2 kubenswrapper[4776]: E1011 10:35:39.586192 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" podUID="17bef070-1a9d-4090-b97a-7ce2c1c93b19" Oct 11 10:35:39.918102 master-2 kubenswrapper[4776]: I1011 10:35:39.918043 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:35:39.918314 master-2 kubenswrapper[4776]: I1011 10:35:39.918043 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:35:39.924716 master-2 kubenswrapper[4776]: I1011 10:35:39.924688 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:35:39.930304 master-2 kubenswrapper[4776]: I1011 10:35:39.930274 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:35:39.970557 master-2 kubenswrapper[4776]: I1011 10:35:39.970501 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:39.970557 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:39.970557 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:39.970557 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:39.970884 master-2 kubenswrapper[4776]: I1011 10:35:39.970563 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:40.115958 master-2 kubenswrapper[4776]: I1011 10:35:40.115867 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert\") pod \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " Oct 11 10:35:40.115958 master-2 kubenswrapper[4776]: I1011 10:35:40.115936 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjjzl\" (UniqueName: \"kubernetes.io/projected/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-kube-api-access-tjjzl\") pod \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " Oct 11 10:35:40.115958 master-2 kubenswrapper[4776]: I1011 10:35:40.115954 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-config\") pod \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " Oct 11 10:35:40.116427 master-2 
kubenswrapper[4776]: I1011 10:35:40.116006 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sqbz\" (UniqueName: \"kubernetes.io/projected/17bef070-1a9d-4090-b97a-7ce2c1c93b19-kube-api-access-9sqbz\") pod \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " Oct 11 10:35:40.116427 master-2 kubenswrapper[4776]: I1011 10:35:40.116024 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-proxy-ca-bundles\") pod \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\" (UID: \"cacd2d60-e8a5-450f-a4ad-dfc0194e3325\") " Oct 11 10:35:40.116427 master-2 kubenswrapper[4776]: I1011 10:35:40.116048 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert\") pod \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " Oct 11 10:35:40.116427 master-2 kubenswrapper[4776]: I1011 10:35:40.116099 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-config\") pod \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\" (UID: \"17bef070-1a9d-4090-b97a-7ce2c1c93b19\") " Oct 11 10:35:40.116838 master-2 kubenswrapper[4776]: I1011 10:35:40.116536 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "cacd2d60-e8a5-450f-a4ad-dfc0194e3325" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:35:40.116838 master-2 kubenswrapper[4776]: I1011 10:35:40.116608 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-config" (OuterVolumeSpecName: "config") pod "cacd2d60-e8a5-450f-a4ad-dfc0194e3325" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:35:40.116838 master-2 kubenswrapper[4776]: I1011 10:35:40.116616 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-config" (OuterVolumeSpecName: "config") pod "17bef070-1a9d-4090-b97a-7ce2c1c93b19" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:35:40.118978 master-2 kubenswrapper[4776]: I1011 10:35:40.118942 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17bef070-1a9d-4090-b97a-7ce2c1c93b19-kube-api-access-9sqbz" (OuterVolumeSpecName: "kube-api-access-9sqbz") pod "17bef070-1a9d-4090-b97a-7ce2c1c93b19" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19"). InnerVolumeSpecName "kube-api-access-9sqbz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:35:40.119054 master-2 kubenswrapper[4776]: I1011 10:35:40.119031 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "17bef070-1a9d-4090-b97a-7ce2c1c93b19" (UID: "17bef070-1a9d-4090-b97a-7ce2c1c93b19"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:35:40.119563 master-2 kubenswrapper[4776]: I1011 10:35:40.119518 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-kube-api-access-tjjzl" (OuterVolumeSpecName: "kube-api-access-tjjzl") pod "cacd2d60-e8a5-450f-a4ad-dfc0194e3325" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325"). InnerVolumeSpecName "kube-api-access-tjjzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:35:40.119771 master-2 kubenswrapper[4776]: I1011 10:35:40.119733 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cacd2d60-e8a5-450f-a4ad-dfc0194e3325" (UID: "cacd2d60-e8a5-450f-a4ad-dfc0194e3325"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:35:40.218018 master-2 kubenswrapper[4776]: I1011 10:35:40.217892 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjjzl\" (UniqueName: \"kubernetes.io/projected/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-kube-api-access-tjjzl\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:40.218018 master-2 kubenswrapper[4776]: I1011 10:35:40.217931 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:40.218018 master-2 kubenswrapper[4776]: I1011 10:35:40.217941 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sqbz\" (UniqueName: \"kubernetes.io/projected/17bef070-1a9d-4090-b97a-7ce2c1c93b19-kube-api-access-9sqbz\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:40.218018 master-2 kubenswrapper[4776]: I1011 10:35:40.217950 4776 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-proxy-ca-bundles\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:40.218018 master-2 kubenswrapper[4776]: I1011 10:35:40.217960 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17bef070-1a9d-4090-b97a-7ce2c1c93b19-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:40.218018 master-2 kubenswrapper[4776]: I1011 10:35:40.217968 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:40.218018 master-2 kubenswrapper[4776]: I1011 10:35:40.217977 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:40.924440 master-2 kubenswrapper[4776]: I1011 10:35:40.924350 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb" Oct 11 10:35:40.924440 master-2 kubenswrapper[4776]: I1011 10:35:40.924387 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-546b64dc7b-pdhmc" Oct 11 10:35:40.970096 master-2 kubenswrapper[4776]: I1011 10:35:40.970006 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:40.970096 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:40.970096 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:40.970096 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:40.970096 master-2 kubenswrapper[4776]: I1011 10:35:40.970069 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:40.984506 master-2 kubenswrapper[4776]: I1011 10:35:40.984424 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb"] Oct 11 10:35:40.990346 master-2 kubenswrapper[4776]: I1011 10:35:40.990273 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv"] Oct 11 10:35:40.990770 master-2 kubenswrapper[4776]: E1011 10:35:40.990731 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebeec22d-9309-4efd-bbc0-f44c750a258c" containerName="installer" Oct 11 10:35:40.990830 master-2 kubenswrapper[4776]: I1011 10:35:40.990775 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebeec22d-9309-4efd-bbc0-f44c750a258c" containerName="installer" Oct 11 10:35:40.991052 master-2 kubenswrapper[4776]: I1011 10:35:40.990999 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebeec22d-9309-4efd-bbc0-f44c750a258c" containerName="installer" Oct 11 10:35:40.991788 master-2 kubenswrapper[4776]: I1011 10:35:40.991746 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:40.995006 master-2 kubenswrapper[4776]: I1011 10:35:40.994966 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 11 10:35:40.995006 master-2 kubenswrapper[4776]: I1011 10:35:40.994975 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 11 10:35:40.995328 master-2 kubenswrapper[4776]: I1011 10:35:40.995261 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 11 10:35:40.995328 master-2 kubenswrapper[4776]: I1011 10:35:40.995301 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 11 10:35:40.995484 master-2 kubenswrapper[4776]: I1011 10:35:40.995346 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 11 10:35:41.003873 master-2 kubenswrapper[4776]: I1011 10:35:40.997242 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb"] Oct 11 10:35:41.007228 master-2 kubenswrapper[4776]: I1011 10:35:41.007199 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv"] Oct 11 10:35:41.022528 master-2 kubenswrapper[4776]: I1011 10:35:41.022481 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-546b64dc7b-pdhmc"] Oct 11 10:35:41.026327 master-2 kubenswrapper[4776]: I1011 10:35:41.026287 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-serving-cert\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.026559 master-2 kubenswrapper[4776]: I1011 10:35:41.026522 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-config\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.026693 master-2 kubenswrapper[4776]: I1011 10:35:41.026650 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-client-ca\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.026811 master-2 kubenswrapper[4776]: I1011 10:35:41.026781 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccpmn\" (UniqueName: \"kubernetes.io/projected/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-kube-api-access-ccpmn\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " 
pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.026921 master-2 kubenswrapper[4776]: I1011 10:35:41.026897 4776 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17bef070-1a9d-4090-b97a-7ce2c1c93b19-client-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:41.027127 master-2 kubenswrapper[4776]: I1011 10:35:41.027101 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-546b64dc7b-pdhmc"] Oct 11 10:35:41.128393 master-2 kubenswrapper[4776]: I1011 10:35:41.128337 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-config\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.128393 master-2 kubenswrapper[4776]: I1011 10:35:41.128399 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-client-ca\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.128654 master-2 kubenswrapper[4776]: I1011 10:35:41.128429 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccpmn\" (UniqueName: \"kubernetes.io/projected/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-kube-api-access-ccpmn\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.128654 master-2 kubenswrapper[4776]: I1011 10:35:41.128461 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-serving-cert\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.128654 master-2 kubenswrapper[4776]: I1011 10:35:41.128515 4776 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cacd2d60-e8a5-450f-a4ad-dfc0194e3325-client-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:41.130010 master-2 kubenswrapper[4776]: I1011 10:35:41.129961 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-config\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.130364 master-2 kubenswrapper[4776]: I1011 10:35:41.130308 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-client-ca\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.136543 master-2 kubenswrapper[4776]: I1011 10:35:41.136501 4776 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-serving-cert\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.150560 master-2 kubenswrapper[4776]: I1011 10:35:41.150520 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccpmn\" (UniqueName: \"kubernetes.io/projected/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-kube-api-access-ccpmn\") pod \"route-controller-manager-7966cd474-whtvv\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.320122 master-2 kubenswrapper[4776]: I1011 10:35:41.319949 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:41.688232 master-2 kubenswrapper[4776]: I1011 10:35:41.688164 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv"] Oct 11 10:35:41.692006 master-2 kubenswrapper[4776]: W1011 10:35:41.691944 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa039e2d_e3c6_47a6_ad16_9f189e5a70e7.slice/crio-af9d02f52a1563e3017910d20a929f11b45fd257a3dc461ad875648500138f09 WatchSource:0}: Error finding container af9d02f52a1563e3017910d20a929f11b45fd257a3dc461ad875648500138f09: Status 404 returned error can't find the container with id af9d02f52a1563e3017910d20a929f11b45fd257a3dc461ad875648500138f09 Oct 11 10:35:41.934978 master-2 kubenswrapper[4776]: I1011 10:35:41.934912 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" event={"ID":"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7","Type":"ContainerStarted","Data":"af9d02f52a1563e3017910d20a929f11b45fd257a3dc461ad875648500138f09"} Oct 11 10:35:41.969721 master-2 kubenswrapper[4776]: I1011 10:35:41.969574 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:41.969721 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:41.969721 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:41.969721 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:41.969721 master-2 kubenswrapper[4776]: I1011 10:35:41.969655 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:42.064559 master-2 kubenswrapper[4776]: I1011 10:35:42.064486 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17bef070-1a9d-4090-b97a-7ce2c1c93b19" path="/var/lib/kubelet/pods/17bef070-1a9d-4090-b97a-7ce2c1c93b19/volumes" Oct 11 10:35:42.064949 master-2 kubenswrapper[4776]: I1011 10:35:42.064909 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cacd2d60-e8a5-450f-a4ad-dfc0194e3325" 
path="/var/lib/kubelet/pods/cacd2d60-e8a5-450f-a4ad-dfc0194e3325/volumes" Oct 11 10:35:42.969172 master-2 kubenswrapper[4776]: I1011 10:35:42.969114 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:42.969172 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:42.969172 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:42.969172 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:42.969172 master-2 kubenswrapper[4776]: I1011 10:35:42.969172 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:43.032361 master-2 kubenswrapper[4776]: I1011 10:35:43.032316 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 10:35:43.032594 master-2 kubenswrapper[4776]: I1011 10:35:43.032380 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:35:43.198935 master-2 kubenswrapper[4776]: I1011 10:35:43.198881 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-guard-master-2" Oct 11 10:35:43.954351 master-2 kubenswrapper[4776]: I1011 10:35:43.954138 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77c7855cb4-l7mc2"] Oct 11 10:35:43.955184 master-2 kubenswrapper[4776]: I1011 10:35:43.954967 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:43.958703 master-2 kubenswrapper[4776]: I1011 10:35:43.958549 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 11 10:35:43.958842 master-2 kubenswrapper[4776]: I1011 10:35:43.958636 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 11 10:35:43.958925 master-2 kubenswrapper[4776]: I1011 10:35:43.958654 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 11 10:35:43.960330 master-2 kubenswrapper[4776]: I1011 10:35:43.960297 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 11 10:35:43.960550 master-2 kubenswrapper[4776]: I1011 10:35:43.960406 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 11 10:35:43.962957 master-2 kubenswrapper[4776]: I1011 10:35:43.962913 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77c7855cb4-l7mc2"] Oct 11 10:35:43.964979 master-2 kubenswrapper[4776]: I1011 10:35:43.964920 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mmxs\" (UniqueName: \"kubernetes.io/projected/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-kube-api-access-8mmxs\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:43.965125 master-2 kubenswrapper[4776]: I1011 10:35:43.965091 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-client-ca\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:43.965247 master-2 kubenswrapper[4776]: I1011 10:35:43.965217 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-serving-cert\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:43.965287 master-2 kubenswrapper[4776]: I1011 10:35:43.965260 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-config\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:43.965405 master-2 kubenswrapper[4776]: I1011 10:35:43.965363 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-proxy-ca-bundles\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:43.969346 master-2 
kubenswrapper[4776]: I1011 10:35:43.969310 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:43.969346 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:43.969346 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:43.969346 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:43.975096 master-2 kubenswrapper[4776]: I1011 10:35:43.969359 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:43.975096 master-2 kubenswrapper[4776]: I1011 10:35:43.970903 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 11 10:35:44.066772 master-2 kubenswrapper[4776]: I1011 10:35:44.066721 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-proxy-ca-bundles\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:44.066967 master-2 kubenswrapper[4776]: I1011 10:35:44.066803 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mmxs\" (UniqueName: \"kubernetes.io/projected/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-kube-api-access-8mmxs\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:44.066967 master-2 kubenswrapper[4776]: I1011 10:35:44.066853 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-client-ca\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:44.066967 master-2 kubenswrapper[4776]: I1011 10:35:44.066906 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-serving-cert\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:44.066967 master-2 kubenswrapper[4776]: I1011 10:35:44.066929 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-config\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:44.068394 master-2 kubenswrapper[4776]: I1011 10:35:44.068369 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-config\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " 
pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:44.068707 master-2 kubenswrapper[4776]: I1011 10:35:44.068656 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-proxy-ca-bundles\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:44.069312 master-2 kubenswrapper[4776]: I1011 10:35:44.069282 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-client-ca\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:44.072523 master-2 kubenswrapper[4776]: I1011 10:35:44.072410 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-serving-cert\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:44.097228 master-2 kubenswrapper[4776]: I1011 10:35:44.097180 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mmxs\" (UniqueName: \"kubernetes.io/projected/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-kube-api-access-8mmxs\") pod \"controller-manager-77c7855cb4-l7mc2\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:44.294354 master-2 kubenswrapper[4776]: I1011 10:35:44.294301 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:44.716344 master-2 kubenswrapper[4776]: I1011 10:35:44.716287 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77c7855cb4-l7mc2"] Oct 11 10:35:44.724602 master-2 kubenswrapper[4776]: W1011 10:35:44.724552 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01d9e8ba_5d12_4d58_8db6_dbbea31c4df1.slice/crio-bc433b79c55289ef4a64be58481abb43a7688c174b82b2fdfa0d85577d07edb1 WatchSource:0}: Error finding container bc433b79c55289ef4a64be58481abb43a7688c174b82b2fdfa0d85577d07edb1: Status 404 returned error can't find the container with id bc433b79c55289ef4a64be58481abb43a7688c174b82b2fdfa0d85577d07edb1 Oct 11 10:35:44.963312 master-2 kubenswrapper[4776]: I1011 10:35:44.963248 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" event={"ID":"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7","Type":"ContainerStarted","Data":"6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49"} Oct 11 10:35:44.963587 master-2 kubenswrapper[4776]: I1011 10:35:44.963532 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:44.964331 master-2 kubenswrapper[4776]: I1011 10:35:44.964303 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" event={"ID":"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1","Type":"ContainerStarted","Data":"bc433b79c55289ef4a64be58481abb43a7688c174b82b2fdfa0d85577d07edb1"} Oct 11 10:35:44.968745 master-2 kubenswrapper[4776]: I1011 10:35:44.968647 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:44.969304 master-2 kubenswrapper[4776]: I1011 10:35:44.969268 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:44.969304 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:44.969304 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:44.969304 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:44.969715 master-2 kubenswrapper[4776]: I1011 10:35:44.969316 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:44.980793 master-2 kubenswrapper[4776]: I1011 10:35:44.980720 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" podStartSLOduration=3.807172043 podStartE2EDuration="5.980703505s" podCreationTimestamp="2025-10-11 10:35:39 +0000 UTC" firstStartedPulling="2025-10-11 10:35:41.696022064 +0000 UTC m=+576.480448783" lastFinishedPulling="2025-10-11 10:35:43.869553536 +0000 UTC m=+578.653980245" observedRunningTime="2025-10-11 10:35:44.97827789 +0000 UTC m=+579.762704639" watchObservedRunningTime="2025-10-11 10:35:44.980703505 
+0000 UTC m=+579.765130214" Oct 11 10:35:45.970285 master-2 kubenswrapper[4776]: I1011 10:35:45.970206 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:45.970285 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:45.970285 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:45.970285 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:45.970860 master-2 kubenswrapper[4776]: I1011 10:35:45.970305 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:46.700856 master-2 kubenswrapper[4776]: E1011 10:35:46.700814 4776 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-ap7ej74ueigk4: secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:35:46.701174 master-2 kubenswrapper[4776]: E1011 10:35:46.700906 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle podName:5473628e-94c8-4706-bb03-ff4836debe5f nodeName:}" failed. No retries permitted until 2025-10-11 10:36:02.700883215 +0000 UTC m=+597.485309974 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle") pod "metrics-server-65d86dff78-crzgp" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f") : secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:35:46.969445 master-2 kubenswrapper[4776]: I1011 10:35:46.969309 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:46.969445 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:46.969445 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:46.969445 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:46.969445 master-2 kubenswrapper[4776]: I1011 10:35:46.969396 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:47.796899 master-2 kubenswrapper[4776]: I1011 10:35:47.796838 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-tpjwk_3594e65d-a9cb-4d12-b4cd-88229b18abdc/machine-config-server/0.log" Oct 11 10:35:47.969058 master-2 kubenswrapper[4776]: I1011 10:35:47.969007 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:47.969058 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:47.969058 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:47.969058 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:47.969321 
master-2 kubenswrapper[4776]: I1011 10:35:47.969069 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:47.984242 master-2 kubenswrapper[4776]: I1011 10:35:47.984197 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" event={"ID":"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1","Type":"ContainerStarted","Data":"15dc2b1cfde423b619736343b47ca9dd39eca021477b309bd20f1ac3429f0eac"} Oct 11 10:35:47.984695 master-2 kubenswrapper[4776]: I1011 10:35:47.984636 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:47.990808 master-2 kubenswrapper[4776]: I1011 10:35:47.990785 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:35:48.007379 master-2 kubenswrapper[4776]: I1011 10:35:48.007258 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" podStartSLOduration=6.394584964 podStartE2EDuration="9.007224289s" podCreationTimestamp="2025-10-11 10:35:39 +0000 UTC" firstStartedPulling="2025-10-11 10:35:44.727096339 +0000 UTC m=+579.511523048" lastFinishedPulling="2025-10-11 10:35:47.339735664 +0000 UTC m=+582.124162373" observedRunningTime="2025-10-11 10:35:48.002827663 +0000 UTC m=+582.787254372" watchObservedRunningTime="2025-10-11 10:35:48.007224289 +0000 UTC m=+582.791650998" Oct 11 10:35:48.969612 master-2 kubenswrapper[4776]: I1011 10:35:48.969563 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:48.969612 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:48.969612 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:48.969612 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:48.970331 master-2 kubenswrapper[4776]: I1011 10:35:48.969639 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:49.030175 master-2 kubenswrapper[4776]: I1011 10:35:49.030114 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mwqr6"] Oct 11 10:35:49.030409 master-2 kubenswrapper[4776]: I1011 10:35:49.030350 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mwqr6" podUID="444ea5b2-c9dc-4685-9f66-2273b30d9045" containerName="registry-server" containerID="cri-o://79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a" gracePeriod=2 Oct 11 10:35:49.138978 master-2 kubenswrapper[4776]: I1011 10:35:49.138934 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-2" Oct 11 10:35:49.153829 master-2 kubenswrapper[4776]: I1011 10:35:49.153792 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-etcd/etcd-master-2" Oct 11 10:35:49.386915 master-2 kubenswrapper[4776]: I1011 10:35:49.386815 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv"] Oct 11 10:35:49.387174 master-2 kubenswrapper[4776]: I1011 10:35:49.387126 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" podUID="fa039e2d-e3c6-47a6-ad16-9f189e5a70e7" containerName="route-controller-manager" containerID="cri-o://6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49" gracePeriod=30 Oct 11 10:35:49.454910 master-2 kubenswrapper[4776]: I1011 10:35:49.454441 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xtrbk"] Oct 11 10:35:49.455534 master-2 kubenswrapper[4776]: I1011 10:35:49.455514 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xtrbk" Oct 11 10:35:49.461928 master-2 kubenswrapper[4776]: I1011 10:35:49.461873 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-5v5km" Oct 11 10:35:49.509418 master-2 kubenswrapper[4776]: I1011 10:35:49.509282 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xtrbk"] Oct 11 10:35:49.541886 master-2 kubenswrapper[4776]: I1011 10:35:49.541772 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1afe0068-3c97-4916-ba53-53f2841a95b0-catalog-content\") pod \"certified-operators-xtrbk\" (UID: \"1afe0068-3c97-4916-ba53-53f2841a95b0\") " pod="openshift-marketplace/certified-operators-xtrbk" Oct 11 10:35:49.542588 master-2 kubenswrapper[4776]: I1011 10:35:49.542270 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbhj6\" (UniqueName: \"kubernetes.io/projected/1afe0068-3c97-4916-ba53-53f2841a95b0-kube-api-access-nbhj6\") pod \"certified-operators-xtrbk\" (UID: \"1afe0068-3c97-4916-ba53-53f2841a95b0\") " pod="openshift-marketplace/certified-operators-xtrbk" Oct 11 10:35:49.542588 master-2 kubenswrapper[4776]: I1011 10:35:49.542359 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1afe0068-3c97-4916-ba53-53f2841a95b0-utilities\") pod \"certified-operators-xtrbk\" (UID: \"1afe0068-3c97-4916-ba53-53f2841a95b0\") " pod="openshift-marketplace/certified-operators-xtrbk" Oct 11 10:35:49.579731 master-2 kubenswrapper[4776]: I1011 10:35:49.579209 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:35:49.644412 master-2 kubenswrapper[4776]: I1011 10:35:49.644320 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tlh8\" (UniqueName: \"kubernetes.io/projected/444ea5b2-c9dc-4685-9f66-2273b30d9045-kube-api-access-7tlh8\") pod \"444ea5b2-c9dc-4685-9f66-2273b30d9045\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") " Oct 11 10:35:49.644412 master-2 kubenswrapper[4776]: I1011 10:35:49.644392 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-utilities\") pod \"444ea5b2-c9dc-4685-9f66-2273b30d9045\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") " Oct 11 10:35:49.644704 master-2 kubenswrapper[4776]: I1011 10:35:49.644448 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-catalog-content\") pod \"444ea5b2-c9dc-4685-9f66-2273b30d9045\" (UID: \"444ea5b2-c9dc-4685-9f66-2273b30d9045\") " Oct 11 10:35:49.644704 master-2 kubenswrapper[4776]: I1011 10:35:49.644625 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1afe0068-3c97-4916-ba53-53f2841a95b0-catalog-content\") pod \"certified-operators-xtrbk\" (UID: \"1afe0068-3c97-4916-ba53-53f2841a95b0\") " pod="openshift-marketplace/certified-operators-xtrbk" Oct 11 10:35:49.644842 master-2 kubenswrapper[4776]: I1011 10:35:49.644811 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbhj6\" (UniqueName: \"kubernetes.io/projected/1afe0068-3c97-4916-ba53-53f2841a95b0-kube-api-access-nbhj6\") pod \"certified-operators-xtrbk\" (UID: \"1afe0068-3c97-4916-ba53-53f2841a95b0\") " pod="openshift-marketplace/certified-operators-xtrbk" Oct 11 10:35:49.644842 master-2 kubenswrapper[4776]: I1011 10:35:49.644838 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1afe0068-3c97-4916-ba53-53f2841a95b0-utilities\") pod \"certified-operators-xtrbk\" (UID: \"1afe0068-3c97-4916-ba53-53f2841a95b0\") " pod="openshift-marketplace/certified-operators-xtrbk" Oct 11 10:35:49.645316 master-2 kubenswrapper[4776]: I1011 10:35:49.645291 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1afe0068-3c97-4916-ba53-53f2841a95b0-utilities\") pod \"certified-operators-xtrbk\" (UID: \"1afe0068-3c97-4916-ba53-53f2841a95b0\") " pod="openshift-marketplace/certified-operators-xtrbk" Oct 11 10:35:49.646397 master-2 kubenswrapper[4776]: I1011 10:35:49.646023 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1afe0068-3c97-4916-ba53-53f2841a95b0-catalog-content\") pod \"certified-operators-xtrbk\" (UID: \"1afe0068-3c97-4916-ba53-53f2841a95b0\") " pod="openshift-marketplace/certified-operators-xtrbk" Oct 11 10:35:49.647361 master-2 kubenswrapper[4776]: I1011 10:35:49.647308 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-utilities" (OuterVolumeSpecName: "utilities") pod "444ea5b2-c9dc-4685-9f66-2273b30d9045" (UID: "444ea5b2-c9dc-4685-9f66-2273b30d9045"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:35:49.658793 master-2 kubenswrapper[4776]: I1011 10:35:49.658450 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/444ea5b2-c9dc-4685-9f66-2273b30d9045-kube-api-access-7tlh8" (OuterVolumeSpecName: "kube-api-access-7tlh8") pod "444ea5b2-c9dc-4685-9f66-2273b30d9045" (UID: "444ea5b2-c9dc-4685-9f66-2273b30d9045"). InnerVolumeSpecName "kube-api-access-7tlh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:35:49.670698 master-2 kubenswrapper[4776]: I1011 10:35:49.670628 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbhj6\" (UniqueName: \"kubernetes.io/projected/1afe0068-3c97-4916-ba53-53f2841a95b0-kube-api-access-nbhj6\") pod \"certified-operators-xtrbk\" (UID: \"1afe0068-3c97-4916-ba53-53f2841a95b0\") " pod="openshift-marketplace/certified-operators-xtrbk" Oct 11 10:35:49.724548 master-2 kubenswrapper[4776]: I1011 10:35:49.724476 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "444ea5b2-c9dc-4685-9f66-2273b30d9045" (UID: "444ea5b2-c9dc-4685-9f66-2273b30d9045"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:35:49.747725 master-2 kubenswrapper[4776]: I1011 10:35:49.747087 4776 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-catalog-content\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:49.747725 master-2 kubenswrapper[4776]: I1011 10:35:49.747133 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tlh8\" (UniqueName: \"kubernetes.io/projected/444ea5b2-c9dc-4685-9f66-2273b30d9045-kube-api-access-7tlh8\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:49.747725 master-2 kubenswrapper[4776]: I1011 10:35:49.747150 4776 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444ea5b2-c9dc-4685-9f66-2273b30d9045-utilities\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:49.778671 master-2 kubenswrapper[4776]: I1011 10:35:49.778628 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:49.847796 master-2 kubenswrapper[4776]: I1011 10:35:49.847735 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-serving-cert\") pod \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " Oct 11 10:35:49.847796 master-2 kubenswrapper[4776]: I1011 10:35:49.847781 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-config\") pod \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " Oct 11 10:35:49.848048 master-2 kubenswrapper[4776]: I1011 10:35:49.847834 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccpmn\" (UniqueName: \"kubernetes.io/projected/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-kube-api-access-ccpmn\") pod \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " Oct 11 10:35:49.848048 master-2 kubenswrapper[4776]: I1011 10:35:49.847887 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-client-ca\") pod \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\" (UID: \"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7\") " Oct 11 10:35:49.848542 master-2 kubenswrapper[4776]: I1011 10:35:49.848512 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-client-ca" (OuterVolumeSpecName: "client-ca") pod "fa039e2d-e3c6-47a6-ad16-9f189e5a70e7" (UID: "fa039e2d-e3c6-47a6-ad16-9f189e5a70e7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:35:49.848664 master-2 kubenswrapper[4776]: I1011 10:35:49.848599 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-config" (OuterVolumeSpecName: "config") pod "fa039e2d-e3c6-47a6-ad16-9f189e5a70e7" (UID: "fa039e2d-e3c6-47a6-ad16-9f189e5a70e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:35:49.850603 master-2 kubenswrapper[4776]: I1011 10:35:49.850516 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fa039e2d-e3c6-47a6-ad16-9f189e5a70e7" (UID: "fa039e2d-e3c6-47a6-ad16-9f189e5a70e7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:35:49.851747 master-2 kubenswrapper[4776]: I1011 10:35:49.851700 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-kube-api-access-ccpmn" (OuterVolumeSpecName: "kube-api-access-ccpmn") pod "fa039e2d-e3c6-47a6-ad16-9f189e5a70e7" (UID: "fa039e2d-e3c6-47a6-ad16-9f189e5a70e7"). InnerVolumeSpecName "kube-api-access-ccpmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:35:49.901082 master-2 kubenswrapper[4776]: I1011 10:35:49.901007 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xtrbk" Oct 11 10:35:49.949584 master-2 kubenswrapper[4776]: I1011 10:35:49.949532 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:49.949584 master-2 kubenswrapper[4776]: I1011 10:35:49.949572 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:49.949584 master-2 kubenswrapper[4776]: I1011 10:35:49.949587 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccpmn\" (UniqueName: \"kubernetes.io/projected/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-kube-api-access-ccpmn\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:49.949584 master-2 kubenswrapper[4776]: I1011 10:35:49.949599 4776 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7-client-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:35:49.970014 master-2 kubenswrapper[4776]: I1011 10:35:49.969946 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:49.970014 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:49.970014 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:49.970014 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:49.970563 master-2 kubenswrapper[4776]: I1011 10:35:49.970016 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:50.002172 master-2 kubenswrapper[4776]: I1011 10:35:50.002114 4776 generic.go:334] "Generic (PLEG): container finished" podID="fa039e2d-e3c6-47a6-ad16-9f189e5a70e7" containerID="6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49" exitCode=0 Oct 11 10:35:50.002354 master-2 kubenswrapper[4776]: I1011 10:35:50.002189 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" event={"ID":"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7","Type":"ContainerDied","Data":"6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49"} Oct 11 10:35:50.002354 master-2 kubenswrapper[4776]: I1011 10:35:50.002217 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" event={"ID":"fa039e2d-e3c6-47a6-ad16-9f189e5a70e7","Type":"ContainerDied","Data":"af9d02f52a1563e3017910d20a929f11b45fd257a3dc461ad875648500138f09"} Oct 11 10:35:50.002354 master-2 kubenswrapper[4776]: I1011 10:35:50.002256 4776 scope.go:117] "RemoveContainer" containerID="6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49" Oct 11 10:35:50.002354 master-2 kubenswrapper[4776]: I1011 10:35:50.002354 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv" Oct 11 10:35:50.010717 master-2 kubenswrapper[4776]: I1011 10:35:50.010522 4776 generic.go:334] "Generic (PLEG): container finished" podID="444ea5b2-c9dc-4685-9f66-2273b30d9045" containerID="79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a" exitCode=0 Oct 11 10:35:50.010717 master-2 kubenswrapper[4776]: I1011 10:35:50.010590 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mwqr6" Oct 11 10:35:50.010717 master-2 kubenswrapper[4776]: I1011 10:35:50.010587 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwqr6" event={"ID":"444ea5b2-c9dc-4685-9f66-2273b30d9045","Type":"ContainerDied","Data":"79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a"} Oct 11 10:35:50.010875 master-2 kubenswrapper[4776]: I1011 10:35:50.010753 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwqr6" event={"ID":"444ea5b2-c9dc-4685-9f66-2273b30d9045","Type":"ContainerDied","Data":"3de15bd009d5969b3f80470fb7549e4f068bb4b317e68ee93b70421988c245b5"} Oct 11 10:35:50.038711 master-2 kubenswrapper[4776]: I1011 10:35:50.035902 4776 scope.go:117] "RemoveContainer" containerID="6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49" Oct 11 10:35:50.038711 master-2 kubenswrapper[4776]: E1011 10:35:50.036888 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49\": container with ID starting with 6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49 not found: ID does not exist" containerID="6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49" Oct 11 10:35:50.038711 master-2 kubenswrapper[4776]: I1011 10:35:50.036927 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49"} err="failed to get container status \"6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49\": rpc error: code = NotFound desc = could not find container \"6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49\": container with ID starting with 6449227b0f478ecc7d536677754f75137fe2ac56a0b2f56edf6447695a1c4c49 not found: ID does not exist" Oct 11 10:35:50.038711 master-2 kubenswrapper[4776]: I1011 10:35:50.036953 4776 scope.go:117] "RemoveContainer" containerID="79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a" Oct 11 10:35:50.040638 master-2 kubenswrapper[4776]: I1011 10:35:50.040604 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv"] Oct 11 10:35:50.048474 master-2 kubenswrapper[4776]: I1011 10:35:50.048386 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv"] Oct 11 10:35:50.067711 master-2 kubenswrapper[4776]: I1011 10:35:50.063893 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mwqr6"] Oct 11 10:35:50.068016 master-2 kubenswrapper[4776]: I1011 10:35:50.067886 4776 scope.go:117] "RemoveContainer" containerID="2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab" Oct 11 10:35:50.073061 master-2 
kubenswrapper[4776]: I1011 10:35:50.073021 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa039e2d-e3c6-47a6-ad16-9f189e5a70e7" path="/var/lib/kubelet/pods/fa039e2d-e3c6-47a6-ad16-9f189e5a70e7/volumes" Oct 11 10:35:50.073851 master-2 kubenswrapper[4776]: I1011 10:35:50.073816 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mwqr6"] Oct 11 10:35:50.089313 master-2 kubenswrapper[4776]: I1011 10:35:50.086861 4776 scope.go:117] "RemoveContainer" containerID="2b5bd81b9a71396ec0fb2662415ff9b813a2b521b1d6b546f96222c12fb23c82" Oct 11 10:35:50.114119 master-2 kubenswrapper[4776]: I1011 10:35:50.114079 4776 scope.go:117] "RemoveContainer" containerID="79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a" Oct 11 10:35:50.114587 master-2 kubenswrapper[4776]: E1011 10:35:50.114562 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a\": container with ID starting with 79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a not found: ID does not exist" containerID="79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a" Oct 11 10:35:50.114692 master-2 kubenswrapper[4776]: I1011 10:35:50.114636 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a"} err="failed to get container status \"79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a\": rpc error: code = NotFound desc = could not find container \"79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a\": container with ID starting with 79f4f0e86f6e853840bd24fe245c52f17f44c9ed9c1747b2e887507d7b00b56a not found: ID does not exist" Oct 11 10:35:50.114692 master-2 kubenswrapper[4776]: I1011 10:35:50.114656 4776 scope.go:117] "RemoveContainer" containerID="2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab" Oct 11 10:35:50.117556 master-2 kubenswrapper[4776]: E1011 10:35:50.117006 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab\": container with ID starting with 2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab not found: ID does not exist" containerID="2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab" Oct 11 10:35:50.117556 master-2 kubenswrapper[4776]: I1011 10:35:50.117066 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab"} err="failed to get container status \"2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab\": rpc error: code = NotFound desc = could not find container \"2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab\": container with ID starting with 2012208f52b31c006e50f48e942b59299810e273c2f20c844efce04a05e0b3ab not found: ID does not exist" Oct 11 10:35:50.117556 master-2 kubenswrapper[4776]: I1011 10:35:50.117081 4776 scope.go:117] "RemoveContainer" containerID="2b5bd81b9a71396ec0fb2662415ff9b813a2b521b1d6b546f96222c12fb23c82" Oct 11 10:35:50.122827 master-2 kubenswrapper[4776]: E1011 10:35:50.122362 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2b5bd81b9a71396ec0fb2662415ff9b813a2b521b1d6b546f96222c12fb23c82\": container with ID starting with 2b5bd81b9a71396ec0fb2662415ff9b813a2b521b1d6b546f96222c12fb23c82 not found: ID does not exist" containerID="2b5bd81b9a71396ec0fb2662415ff9b813a2b521b1d6b546f96222c12fb23c82" Oct 11 10:35:50.122827 master-2 kubenswrapper[4776]: I1011 10:35:50.122436 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b5bd81b9a71396ec0fb2662415ff9b813a2b521b1d6b546f96222c12fb23c82"} err="failed to get container status \"2b5bd81b9a71396ec0fb2662415ff9b813a2b521b1d6b546f96222c12fb23c82\": rpc error: code = NotFound desc = could not find container \"2b5bd81b9a71396ec0fb2662415ff9b813a2b521b1d6b546f96222c12fb23c82\": container with ID starting with 2b5bd81b9a71396ec0fb2662415ff9b813a2b521b1d6b546f96222c12fb23c82 not found: ID does not exist" Oct 11 10:35:50.308732 master-2 kubenswrapper[4776]: I1011 10:35:50.308687 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xtrbk"] Oct 11 10:35:50.312913 master-2 kubenswrapper[4776]: W1011 10:35:50.312869 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1afe0068_3c97_4916_ba53_53f2841a95b0.slice/crio-6ec235918a9169886c8202b0389c297a53a2ef547006f644ae6b3794eb4bb069 WatchSource:0}: Error finding container 6ec235918a9169886c8202b0389c297a53a2ef547006f644ae6b3794eb4bb069: Status 404 returned error can't find the container with id 6ec235918a9169886c8202b0389c297a53a2ef547006f644ae6b3794eb4bb069 Oct 11 10:35:50.944560 master-2 kubenswrapper[4776]: I1011 10:35:50.944503 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5"] Oct 11 10:35:50.944913 master-2 kubenswrapper[4776]: E1011 10:35:50.944726 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444ea5b2-c9dc-4685-9f66-2273b30d9045" containerName="extract-content" Oct 11 10:35:50.944913 master-2 kubenswrapper[4776]: I1011 10:35:50.944742 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="444ea5b2-c9dc-4685-9f66-2273b30d9045" containerName="extract-content" Oct 11 10:35:50.944913 master-2 kubenswrapper[4776]: E1011 10:35:50.944754 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa039e2d-e3c6-47a6-ad16-9f189e5a70e7" containerName="route-controller-manager" Oct 11 10:35:50.944913 master-2 kubenswrapper[4776]: I1011 10:35:50.944760 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa039e2d-e3c6-47a6-ad16-9f189e5a70e7" containerName="route-controller-manager" Oct 11 10:35:50.944913 master-2 kubenswrapper[4776]: E1011 10:35:50.944772 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444ea5b2-c9dc-4685-9f66-2273b30d9045" containerName="registry-server" Oct 11 10:35:50.944913 master-2 kubenswrapper[4776]: I1011 10:35:50.944779 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="444ea5b2-c9dc-4685-9f66-2273b30d9045" containerName="registry-server" Oct 11 10:35:50.944913 master-2 kubenswrapper[4776]: E1011 10:35:50.944791 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444ea5b2-c9dc-4685-9f66-2273b30d9045" containerName="extract-utilities" Oct 11 10:35:50.944913 master-2 kubenswrapper[4776]: I1011 10:35:50.944797 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="444ea5b2-c9dc-4685-9f66-2273b30d9045" containerName="extract-utilities" Oct 11 
10:35:50.944913 master-2 kubenswrapper[4776]: I1011 10:35:50.944893 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="444ea5b2-c9dc-4685-9f66-2273b30d9045" containerName="registry-server" Oct 11 10:35:50.944913 master-2 kubenswrapper[4776]: I1011 10:35:50.944904 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa039e2d-e3c6-47a6-ad16-9f189e5a70e7" containerName="route-controller-manager" Oct 11 10:35:50.945361 master-2 kubenswrapper[4776]: I1011 10:35:50.945339 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:50.947666 master-2 kubenswrapper[4776]: I1011 10:35:50.947639 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 11 10:35:50.948104 master-2 kubenswrapper[4776]: I1011 10:35:50.948073 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 11 10:35:50.949028 master-2 kubenswrapper[4776]: I1011 10:35:50.949003 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-vwjkz" Oct 11 10:35:50.949113 master-2 kubenswrapper[4776]: I1011 10:35:50.949033 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 11 10:35:50.949113 master-2 kubenswrapper[4776]: I1011 10:35:50.949060 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 11 10:35:50.949382 master-2 kubenswrapper[4776]: I1011 10:35:50.949357 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 11 10:35:50.955409 master-2 kubenswrapper[4776]: I1011 10:35:50.955320 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5"] Oct 11 10:35:50.965489 master-2 kubenswrapper[4776]: I1011 10:35:50.965437 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkssx\" (UniqueName: \"kubernetes.io/projected/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-kube-api-access-wkssx\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:50.965663 master-2 kubenswrapper[4776]: I1011 10:35:50.965505 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-config\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:50.965663 master-2 kubenswrapper[4776]: I1011 10:35:50.965567 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-client-ca\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:50.965663 master-2 kubenswrapper[4776]: I1011 
10:35:50.965610 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-serving-cert\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:50.973343 master-2 kubenswrapper[4776]: I1011 10:35:50.973270 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:50.973343 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:50.973343 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:50.973343 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:50.973813 master-2 kubenswrapper[4776]: I1011 10:35:50.973344 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:51.017265 master-2 kubenswrapper[4776]: I1011 10:35:51.017218 4776 generic.go:334] "Generic (PLEG): container finished" podID="1afe0068-3c97-4916-ba53-53f2841a95b0" containerID="9df0a820a473c70d90d0917b469efe19d0dee775dca56c297c1256c405278716" exitCode=0 Oct 11 10:35:51.017600 master-2 kubenswrapper[4776]: I1011 10:35:51.017539 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtrbk" event={"ID":"1afe0068-3c97-4916-ba53-53f2841a95b0","Type":"ContainerDied","Data":"9df0a820a473c70d90d0917b469efe19d0dee775dca56c297c1256c405278716"} Oct 11 10:35:51.017741 master-2 kubenswrapper[4776]: I1011 10:35:51.017722 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtrbk" event={"ID":"1afe0068-3c97-4916-ba53-53f2841a95b0","Type":"ContainerStarted","Data":"6ec235918a9169886c8202b0389c297a53a2ef547006f644ae6b3794eb4bb069"} Oct 11 10:35:51.067015 master-2 kubenswrapper[4776]: I1011 10:35:51.066972 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkssx\" (UniqueName: \"kubernetes.io/projected/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-kube-api-access-wkssx\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:51.067258 master-2 kubenswrapper[4776]: I1011 10:35:51.067243 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-config\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:51.067401 master-2 kubenswrapper[4776]: I1011 10:35:51.067384 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-client-ca\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " 
pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:51.067520 master-2 kubenswrapper[4776]: I1011 10:35:51.067504 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-serving-cert\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:51.068607 master-2 kubenswrapper[4776]: I1011 10:35:51.068576 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-config\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:51.068799 master-2 kubenswrapper[4776]: I1011 10:35:51.068760 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-client-ca\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:51.070791 master-2 kubenswrapper[4776]: I1011 10:35:51.070759 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-serving-cert\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:51.084181 master-2 kubenswrapper[4776]: I1011 10:35:51.084130 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkssx\" (UniqueName: \"kubernetes.io/projected/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-kube-api-access-wkssx\") pod \"route-controller-manager-68b68f45cd-29wh5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:51.267152 master-2 kubenswrapper[4776]: I1011 10:35:51.267031 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:51.649314 master-2 kubenswrapper[4776]: I1011 10:35:51.649272 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5"] Oct 11 10:35:51.652896 master-2 kubenswrapper[4776]: W1011 10:35:51.652853 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97bde30f_16ad_44f5_ac26_9f0ba5ba74f5.slice/crio-417196648a53013bc23124ecf6d1bf221fe3949e2ac324623e298e42a8c1ca2b WatchSource:0}: Error finding container 417196648a53013bc23124ecf6d1bf221fe3949e2ac324623e298e42a8c1ca2b: Status 404 returned error can't find the container with id 417196648a53013bc23124ecf6d1bf221fe3949e2ac324623e298e42a8c1ca2b Oct 11 10:35:51.969778 master-2 kubenswrapper[4776]: I1011 10:35:51.969701 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:51.969778 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:51.969778 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:51.969778 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:51.969778 master-2 kubenswrapper[4776]: I1011 10:35:51.969762 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:52.026862 master-2 kubenswrapper[4776]: I1011 10:35:52.026813 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtrbk" event={"ID":"1afe0068-3c97-4916-ba53-53f2841a95b0","Type":"ContainerStarted","Data":"9fa35ef84abfec8ae375122e25146737d5391f1e1c18440583ca1f3d3b0910b8"} Oct 11 10:35:52.030910 master-2 kubenswrapper[4776]: I1011 10:35:52.030867 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" event={"ID":"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5","Type":"ContainerStarted","Data":"9278bc0b71056762d1acbc3fed930331878f846bd5deefeb8a2b904499d18eb2"} Oct 11 10:35:52.030910 master-2 kubenswrapper[4776]: I1011 10:35:52.030917 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" event={"ID":"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5","Type":"ContainerStarted","Data":"417196648a53013bc23124ecf6d1bf221fe3949e2ac324623e298e42a8c1ca2b"} Oct 11 10:35:52.031186 master-2 kubenswrapper[4776]: I1011 10:35:52.031142 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:52.059170 master-2 kubenswrapper[4776]: I1011 10:35:52.059111 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" podStartSLOduration=3.059093097 podStartE2EDuration="3.059093097s" podCreationTimestamp="2025-10-11 10:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 
10:35:52.057992578 +0000 UTC m=+586.842419287" watchObservedRunningTime="2025-10-11 10:35:52.059093097 +0000 UTC m=+586.843519806" Oct 11 10:35:52.065992 master-2 kubenswrapper[4776]: I1011 10:35:52.065951 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="444ea5b2-c9dc-4685-9f66-2273b30d9045" path="/var/lib/kubelet/pods/444ea5b2-c9dc-4685-9f66-2273b30d9045/volumes" Oct 11 10:35:52.408419 master-2 kubenswrapper[4776]: I1011 10:35:52.408360 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:35:52.969592 master-2 kubenswrapper[4776]: I1011 10:35:52.969517 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:52.969592 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:52.969592 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:52.969592 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:52.969877 master-2 kubenswrapper[4776]: I1011 10:35:52.969594 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:53.038231 master-2 kubenswrapper[4776]: I1011 10:35:53.038169 4776 generic.go:334] "Generic (PLEG): container finished" podID="1afe0068-3c97-4916-ba53-53f2841a95b0" containerID="9fa35ef84abfec8ae375122e25146737d5391f1e1c18440583ca1f3d3b0910b8" exitCode=0 Oct 11 10:35:53.038715 master-2 kubenswrapper[4776]: I1011 10:35:53.038224 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtrbk" event={"ID":"1afe0068-3c97-4916-ba53-53f2841a95b0","Type":"ContainerDied","Data":"9fa35ef84abfec8ae375122e25146737d5391f1e1c18440583ca1f3d3b0910b8"} Oct 11 10:35:53.970412 master-2 kubenswrapper[4776]: I1011 10:35:53.970353 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:53.970412 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:53.970412 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:53.970412 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:53.970412 master-2 kubenswrapper[4776]: I1011 10:35:53.970416 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:54.045729 master-2 kubenswrapper[4776]: I1011 10:35:54.045642 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtrbk" event={"ID":"1afe0068-3c97-4916-ba53-53f2841a95b0","Type":"ContainerStarted","Data":"6e709bd2044b6a322588e9fb5ed29a0cb190a96dddf3e5653cdd857e40bc453e"} Oct 11 10:35:54.069770 master-2 kubenswrapper[4776]: I1011 10:35:54.069697 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xtrbk" 
podStartSLOduration=2.651084953 podStartE2EDuration="5.069661545s" podCreationTimestamp="2025-10-11 10:35:49 +0000 UTC" firstStartedPulling="2025-10-11 10:35:51.01850293 +0000 UTC m=+585.802929639" lastFinishedPulling="2025-10-11 10:35:53.437079522 +0000 UTC m=+588.221506231" observedRunningTime="2025-10-11 10:35:54.06756879 +0000 UTC m=+588.851995499" watchObservedRunningTime="2025-10-11 10:35:54.069661545 +0000 UTC m=+588.854088244" Oct 11 10:35:54.969628 master-2 kubenswrapper[4776]: I1011 10:35:54.969470 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:54.969628 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:54.969628 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:54.969628 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:54.969628 master-2 kubenswrapper[4776]: I1011 10:35:54.969546 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:55.969756 master-2 kubenswrapper[4776]: I1011 10:35:55.969686 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:55.969756 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:55.969756 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:55.969756 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:55.969756 master-2 kubenswrapper[4776]: I1011 10:35:55.969743 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:56.969625 master-2 kubenswrapper[4776]: I1011 10:35:56.969551 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:56.969625 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:56.969625 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:56.969625 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:56.969625 master-2 kubenswrapper[4776]: I1011 10:35:56.969619 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:57.968510 master-2 kubenswrapper[4776]: I1011 10:35:57.968444 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:57.968510 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:57.968510 master-2 
kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:57.968510 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:57.968510 master-2 kubenswrapper[4776]: I1011 10:35:57.968493 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:58.968566 master-2 kubenswrapper[4776]: I1011 10:35:58.968485 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:58.968566 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:58.968566 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:58.968566 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:58.968566 master-2 kubenswrapper[4776]: I1011 10:35:58.968553 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:35:59.901486 master-2 kubenswrapper[4776]: I1011 10:35:59.901418 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xtrbk" Oct 11 10:35:59.901486 master-2 kubenswrapper[4776]: I1011 10:35:59.901494 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xtrbk" Oct 11 10:35:59.961786 master-2 kubenswrapper[4776]: I1011 10:35:59.961735 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xtrbk" Oct 11 10:35:59.970956 master-2 kubenswrapper[4776]: I1011 10:35:59.970925 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:35:59.970956 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:35:59.970956 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:35:59.970956 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:35:59.971629 master-2 kubenswrapper[4776]: I1011 10:35:59.971595 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:00.119032 master-2 kubenswrapper[4776]: I1011 10:36:00.118958 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xtrbk" Oct 11 10:36:00.968568 master-2 kubenswrapper[4776]: I1011 10:36:00.968483 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:00.968568 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:00.968568 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:00.968568 master-2 
kubenswrapper[4776]: healthz check failed Oct 11 10:36:00.968568 master-2 kubenswrapper[4776]: I1011 10:36:00.968542 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:01.969822 master-2 kubenswrapper[4776]: I1011 10:36:01.969709 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:01.969822 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:01.969822 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:01.969822 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:01.970862 master-2 kubenswrapper[4776]: I1011 10:36:01.970822 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:02.729131 master-2 kubenswrapper[4776]: E1011 10:36:02.729044 4776 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-ap7ej74ueigk4: secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:36:02.729131 master-2 kubenswrapper[4776]: E1011 10:36:02.729119 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle podName:5473628e-94c8-4706-bb03-ff4836debe5f nodeName:}" failed. No retries permitted until 2025-10-11 10:36:34.729103018 +0000 UTC m=+629.513529727 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle") pod "metrics-server-65d86dff78-crzgp" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f") : secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:36:02.970357 master-2 kubenswrapper[4776]: I1011 10:36:02.970290 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:02.970357 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:02.970357 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:02.970357 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:02.971064 master-2 kubenswrapper[4776]: I1011 10:36:02.970366 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:03.970280 master-2 kubenswrapper[4776]: I1011 10:36:03.970191 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:03.970280 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:03.970280 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:03.970280 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:03.971339 master-2 kubenswrapper[4776]: I1011 10:36:03.970295 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:04.970349 master-2 kubenswrapper[4776]: I1011 10:36:04.970278 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:04.970349 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:04.970349 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:04.970349 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:04.970349 master-2 kubenswrapper[4776]: I1011 10:36:04.970362 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:05.980624 master-2 kubenswrapper[4776]: I1011 10:36:05.980517 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:05.980624 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:05.980624 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:05.980624 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:05.982476 master-2 
kubenswrapper[4776]: I1011 10:36:05.980631 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:06.970629 master-2 kubenswrapper[4776]: I1011 10:36:06.970416 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:06.970629 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:06.970629 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:06.970629 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:06.970629 master-2 kubenswrapper[4776]: I1011 10:36:06.970517 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:07.969695 master-2 kubenswrapper[4776]: I1011 10:36:07.969584 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:07.969695 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:07.969695 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:07.969695 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:07.970331 master-2 kubenswrapper[4776]: I1011 10:36:07.969743 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:08.972298 master-2 kubenswrapper[4776]: I1011 10:36:08.969661 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:08.972298 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:08.972298 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:08.972298 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:08.972298 master-2 kubenswrapper[4776]: I1011 10:36:08.969803 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:09.970831 master-2 kubenswrapper[4776]: I1011 10:36:09.970743 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:09.970831 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:09.970831 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:09.970831 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:09.971328 master-2 
kubenswrapper[4776]: I1011 10:36:09.970852 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:10.970104 master-2 kubenswrapper[4776]: I1011 10:36:10.970015 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:10.970104 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:10.970104 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:10.970104 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:10.970104 master-2 kubenswrapper[4776]: I1011 10:36:10.970095 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:11.270878 master-2 kubenswrapper[4776]: E1011 10:36:11.270505 4776 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T10:36:01Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T10:36:01Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T10:36:01Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T10:36:01Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95\\\"],\\\"sizeBytes\\\":1565215279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e\\\"],\\\"sizeBytes\\\":1230574268},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:19171ea92892e53aa0604cd2c0b649c40966da57d9eac1a65807285eb30e4ae1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cb9acd88d372170c9d9491de391f25c2d29c04ae39825a0afc50a06fcc9a7f4c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1195809171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03c8198e20c39819634ba86ebc48d182a8b3f062cf7a3847175b91294512876\\\"],\\\"sizeBytes\\\":981963385},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d\\\"],\\\"sizeBytes\\\":945482213},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5\\\"],\\\"sizeBytes\\\":911296197},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6458d944052d69ffeffc62813d3a5cc3344ce7091b6df0ebf54d73c861355b01\\\"],\\\"sizeBytes\\\":873399372},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7983420590be0b0f62b726996dd73769a35c23a4b3b283f8cf20e09418e814eb\\\"],\\\"sizeBytes\\\":869140966},{\\\"names\\\":[\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f\\\"],\\\"sizeBytes\\\":855643597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e7015eb7a0d62afeba6f2f0dbd57a8ef24b8477b00f66a6789ccf97b78271e9a\\\"],\\\"sizeBytes\\\":855233892},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e7af38d71db3427e74eee755c1dc72589ae723a71d678c920c32868f459028ca\\\"],\\\"sizeBytes\\\":774809152},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6128c3fda0a374e4e705551260ee45b426a747e9d3e450d4ca1a3714fd404207\\\"],\\\"sizeBytes\\\":684971018},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca9272c8bbbde3ffdea2887c91dfb5ec4b09de7a8e2ae03aa5a47f56ff41e326\\\"],\\\"sizeBytes\\\":681716323},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2fe368c29648f07f2b0f3849feef0eda2000555e91d268e2b5a19526179619c\\\"],\\\"sizeBytes\\\":680965375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1656551c63dc1b09263ccc5fb52a13dff12d57e1c7510529789df1b41d253aa9\\\"],\\\"sizeBytes\\\":614682093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7\\\"],\\\"sizeBytes\\\":582409947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a\\\"],\\\"sizeBytes\\\":575181628},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5950bf8a793f25392f3fdfa898a2bfe0998be83e86a5f93c07a9d22a0816b9c6\\\"],\\\"sizeBytes\\\":551247630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c78b39674bd52b55017e08466030e88727f76514fbfa4e1918541697374881b3\\\"],\\\"sizeBytes\\\":541801559},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8\\\"],\\\"sizeBytes\\\":531186824},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbde693d384ae08cdaf9126a9a6359bb5515793f63108ef216cbddf1c995af3e\\\"],\\\"sizeBytes\\\":530836538},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174\\\"],\\\"sizeBytes\\\":511412209},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:ba6f0f2eca65cd386a5109ddbbdb3bab9bb9801e32de56ef34f80e634a7787be\\\"],\\\"sizeBytes\\\":511020601},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e\\\"],\\\"sizeBytes\\\":508004341},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d8df789ec16971dc14423860f7b20b9ee27d926e4e5be632714cadc15e7f9b32\\\"],\\\"sizeBytes\\\":506615759},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5f27555b2adaa9cd82922dde7517c78eac05afdd090d572e62a9a425b42a7d\\\"],\\\"sizeBytes\\\":506261367},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ef76839c19a20a0e01cdd2b9fd53ae31937d6f478b2c2343679099985fe9e47\\\"],\\\"sizeBytes\\\":505315113},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:805f1bf09553ecf2e9d735c881539c011947eee7bf4c977b074e2d0396b9d99a\\\"],\\\"sizeBytes\\\":504222816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:208d81ddcca0864f3a225e11a2fdcf7c67d32bae142bd9a
9d154a76cffea08e7\\\"],\\\"sizeBytes\\\":504201850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55\\\"],\\\"sizeBytes\\\":501914388},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97de153ac76971fa69d4af7166c63416fbe37d759deb7833340c1c39d418b745\\\"],\\\"sizeBytes\\\":501585296},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f425875bda87dc167d613efc88c56256e48364b73174d1392f7d23301baec0b\\\"],\\\"sizeBytes\\\":501010081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05bf4bdb9af40d949fa343ad1fd1d79d032d0bd0eb188ed33fbdceeb5056ce0\\\"],\\\"sizeBytes\\\":499517132},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642\\\"],\\\"sizeBytes\\\":499422833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404\\\"],\\\"sizeBytes\\\":498371692},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19291a8938541dd95496e6f04aad7abf914ea2c8d076c1f149a12368682f85d4\\\"],\\\"sizeBytes\\\":498279559},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c3058c461907ec5ff06a628e935722d7ec8bf86fa90b95269372a6dc41444ce\\\"],\\\"sizeBytes\\\":497698695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b9e086347802546d8040d17296f434edf088305103b874c900beee3a3575c34\\\"],\\\"sizeBytes\\\":497656412},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:776b1203d0e4c0522ff38ffceeddfbad096e187b4d4c927f3ad89bac5f40d5c8\\\"],\\\"sizeBytes\\\":489230204},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa10afc83b17b0d76fcff8963f51e62ae851f145cd6c27f61a0604e0c713fe3a\\\"],\\\"sizeBytes\\\":489030103},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94bcc0ff0f9ec7df4aeb53fe4bf0310e26cb7b40bdf772efc95a7ccfcfe69721\\\"],\\\"sizeBytes\\\":488102305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830ed0d3e5e363859e3b08\\\"],\\\"sizeBytes\\\":480132757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:732db322c7ea7d239293fdd893e493775fd05ed4370bfe908c6995d4beabc0a4\\\"],\\\"sizeBytes\\\":477490934},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:def4bc41ba62687d8c9a68b6f74c39240f651ec7a039a78a6535233581f430a7\\\"],\\\"sizeBytes\\\":477215701},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa8586795f9801090b8f01a74743474c41b5987eefc3a9b2c58f937098a1704f\\\"],\\\"sizeBytes\\\":464468268},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0ca84dadf413f08150ff8224f856cca12667b15168499013d0ff409dd323505d\\\"],\\\"sizeBytes\\\":463860143},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:90c5ef075961ab090e3854d470bb6659737ee76ac96637e6d0dd62080e38e26e\\\"],\\\"sizeBytes\\\":463718256},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5ad9f2d4b8cf9205c5aa91b1eb9abafc2a638c7bd4b3f971f3d6b9a4df7318f\\\"],\\\"sizeBytes\\\":461301475},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2bffa697d52826e0ba76ddc30a78f44b274be22ee87af8d1a9d1c8337162be9\\\"],\\\"sizeBytes\\\":460276288},{\\\"names\\\":[\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169\\\"],\\\"sizeBytes\\\":458126368}]}}\" for node \"master-2\": the server was unable to return a response in the time allotted, but may still be processing the request (patch nodes master-2)" Oct 11 10:36:11.970474 master-2 kubenswrapper[4776]: I1011 10:36:11.970259 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:11.970474 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:11.970474 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:11.970474 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:11.970474 master-2 kubenswrapper[4776]: I1011 10:36:11.970362 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:12.245523 master-2 kubenswrapper[4776]: E1011 10:36:12.245381 4776 controller.go:195] "Failed to update lease" err="Put \"https://api-int.ocp.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:36:12.971002 master-2 kubenswrapper[4776]: I1011 10:36:12.970922 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:12.971002 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:12.971002 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:12.971002 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:12.972059 master-2 kubenswrapper[4776]: I1011 10:36:12.971063 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:13.969488 master-2 kubenswrapper[4776]: I1011 10:36:13.969399 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:13.969488 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:13.969488 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:13.969488 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:13.969488 master-2 kubenswrapper[4776]: I1011 10:36:13.969481 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:14.970919 master-2 kubenswrapper[4776]: I1011 10:36:14.970857 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 
500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:14.970919 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:14.970919 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:14.970919 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:14.971975 master-2 kubenswrapper[4776]: I1011 10:36:14.971854 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:15.970289 master-2 kubenswrapper[4776]: I1011 10:36:15.970189 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:15.970289 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:15.970289 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:15.970289 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:15.970289 master-2 kubenswrapper[4776]: I1011 10:36:15.970266 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:16.970057 master-2 kubenswrapper[4776]: I1011 10:36:16.969963 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:16.970057 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:16.970057 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:16.970057 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:16.971092 master-2 kubenswrapper[4776]: I1011 10:36:16.970086 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:17.970270 master-2 kubenswrapper[4776]: I1011 10:36:17.970197 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:17.970270 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:17.970270 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:17.970270 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:17.970270 master-2 kubenswrapper[4776]: I1011 10:36:17.970273 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:18.970544 master-2 kubenswrapper[4776]: I1011 10:36:18.970437 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 
500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:18.970544 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:18.970544 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:18.970544 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:18.971511 master-2 kubenswrapper[4776]: I1011 10:36:18.970588 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:19.970473 master-2 kubenswrapper[4776]: I1011 10:36:19.970401 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:19.970473 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:19.970473 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:19.970473 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:19.971181 master-2 kubenswrapper[4776]: I1011 10:36:19.970493 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:20.969482 master-2 kubenswrapper[4776]: I1011 10:36:20.969403 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:20.969482 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:20.969482 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:20.969482 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:20.969970 master-2 kubenswrapper[4776]: I1011 10:36:20.969509 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:21.271890 master-2 kubenswrapper[4776]: E1011 10:36:21.271659 4776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-2\": Get \"https://api-int.ocp.openstack.lab:6443/api/v1/nodes/master-2?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:36:21.969704 master-2 kubenswrapper[4776]: I1011 10:36:21.969577 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:21.969704 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:21.969704 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:21.969704 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:21.970197 master-2 kubenswrapper[4776]: I1011 10:36:21.969728 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:22.245860 master-2 kubenswrapper[4776]: E1011 10:36:22.245618 4776 controller.go:195] "Failed to update lease" err="Put \"https://api-int.ocp.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s\": context deadline exceeded" Oct 11 10:36:22.970448 master-2 kubenswrapper[4776]: I1011 10:36:22.970327 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:22.970448 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:22.970448 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:22.970448 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:22.970448 master-2 kubenswrapper[4776]: I1011 10:36:22.970411 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:23.969951 master-2 kubenswrapper[4776]: I1011 10:36:23.969815 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:23.969951 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:23.969951 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:23.969951 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:23.969951 master-2 kubenswrapper[4776]: I1011 10:36:23.969940 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:24.968955 master-2 kubenswrapper[4776]: I1011 10:36:24.968875 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:24.968955 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:24.968955 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:24.968955 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:24.969317 master-2 kubenswrapper[4776]: I1011 10:36:24.968966 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:25.969025 master-2 kubenswrapper[4776]: I1011 10:36:25.968925 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:25.969025 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:25.969025 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 
10:36:25.969025 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:25.969025 master-2 kubenswrapper[4776]: I1011 10:36:25.969020 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:26.969488 master-2 kubenswrapper[4776]: I1011 10:36:26.969387 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:26.969488 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:26.969488 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:26.969488 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:26.969488 master-2 kubenswrapper[4776]: I1011 10:36:26.969443 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:27.970502 master-2 kubenswrapper[4776]: I1011 10:36:27.970415 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:27.970502 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:27.970502 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:27.970502 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:27.970502 master-2 kubenswrapper[4776]: I1011 10:36:27.970499 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:28.969478 master-2 kubenswrapper[4776]: I1011 10:36:28.969418 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:28.969478 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:28.969478 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:28.969478 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:28.970440 master-2 kubenswrapper[4776]: I1011 10:36:28.969492 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:29.969793 master-2 kubenswrapper[4776]: I1011 10:36:29.969730 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:29.969793 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:29.969793 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 
10:36:29.969793 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:29.969793 master-2 kubenswrapper[4776]: I1011 10:36:29.969791 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:30.968591 master-2 kubenswrapper[4776]: I1011 10:36:30.968549 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:30.968591 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:30.968591 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:30.968591 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:30.969009 master-2 kubenswrapper[4776]: I1011 10:36:30.968980 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:31.272629 master-2 kubenswrapper[4776]: E1011 10:36:31.272499 4776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-2\": Get \"https://api-int.ocp.openstack.lab:6443/api/v1/nodes/master-2?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:36:31.969920 master-2 kubenswrapper[4776]: I1011 10:36:31.969810 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:31.969920 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:31.969920 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:31.969920 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:31.969920 master-2 kubenswrapper[4776]: I1011 10:36:31.969904 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:32.247077 master-2 kubenswrapper[4776]: E1011 10:36:32.246929 4776 controller.go:195] "Failed to update lease" err="Put \"https://api-int.ocp.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:36:32.969455 master-2 kubenswrapper[4776]: I1011 10:36:32.969375 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:32.969455 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:32.969455 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:32.969455 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:32.969455 master-2 kubenswrapper[4776]: I1011 10:36:32.969444 4776 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:33.969868 master-2 kubenswrapper[4776]: I1011 10:36:33.969736 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:33.969868 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:33.969868 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:33.969868 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:33.970581 master-2 kubenswrapper[4776]: I1011 10:36:33.969864 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:34.803047 master-2 kubenswrapper[4776]: E1011 10:36:34.802999 4776 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-ap7ej74ueigk4: secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:36:34.803243 master-2 kubenswrapper[4776]: E1011 10:36:34.803219 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle podName:5473628e-94c8-4706-bb03-ff4836debe5f nodeName:}" failed. No retries permitted until 2025-10-11 10:37:38.803092955 +0000 UTC m=+693.587519684 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle") pod "metrics-server-65d86dff78-crzgp" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f") : secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:36:34.970279 master-2 kubenswrapper[4776]: I1011 10:36:34.970204 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:34.970279 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:34.970279 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:34.970279 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:34.970279 master-2 kubenswrapper[4776]: I1011 10:36:34.970270 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:35.970207 master-2 kubenswrapper[4776]: I1011 10:36:35.970123 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:35.970207 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:35.970207 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:35.970207 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:35.970207 master-2 kubenswrapper[4776]: I1011 10:36:35.970198 4776 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:36.734090 master-2 kubenswrapper[4776]: E1011 10:36:36.733967 4776 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{metrics-server-65d86dff78-crzgp.186d696977ca9b1f openshift-monitoring 14134 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-monitoring,Name:metrics-server-65d86dff78-crzgp,UID:5473628e-94c8-4706-bb03-ff4836debe5f,APIVersion:v1,ResourceVersion:10129,FieldPath:,},Reason:FailedMount,Message:MountVolume.SetUp failed for volume \"client-ca-bundle\" : secret \"metrics-server-ap7ej74ueigk4\" not found,Source:EventSource{Component:kubelet,Host:master-2,},FirstTimestamp:2025-10-11 10:35:31 +0000 UTC,LastTimestamp:2025-10-11 10:36:02.729086168 +0000 UTC m=+597.513512877,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-2,}" Oct 11 10:36:36.969653 master-2 kubenswrapper[4776]: I1011 10:36:36.969524 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:36.969653 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:36.969653 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:36.969653 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:36.970104 master-2 kubenswrapper[4776]: I1011 10:36:36.969646 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:37.968832 master-2 kubenswrapper[4776]: I1011 10:36:37.968758 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:37.968832 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:37.968832 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:37.968832 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:37.969496 master-2 kubenswrapper[4776]: I1011 10:36:37.968837 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:38.970491 master-2 kubenswrapper[4776]: I1011 10:36:38.970403 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:38.970491 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:38.970491 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:38.970491 master-2 
kubenswrapper[4776]: healthz check failed Oct 11 10:36:38.970491 master-2 kubenswrapper[4776]: I1011 10:36:38.970480 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:39.970480 master-2 kubenswrapper[4776]: I1011 10:36:39.970406 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:39.970480 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:39.970480 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:39.970480 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:39.971223 master-2 kubenswrapper[4776]: I1011 10:36:39.970502 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:40.969522 master-2 kubenswrapper[4776]: I1011 10:36:40.969434 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:40.969522 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:40.969522 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:40.969522 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:40.969522 master-2 kubenswrapper[4776]: I1011 10:36:40.969500 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:41.273812 master-2 kubenswrapper[4776]: E1011 10:36:41.273565 4776 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-2\": Get \"https://api-int.ocp.openstack.lab:6443/api/v1/nodes/master-2?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:36:41.970233 master-2 kubenswrapper[4776]: I1011 10:36:41.970118 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:41.970233 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:41.970233 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:41.970233 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:41.970233 master-2 kubenswrapper[4776]: I1011 10:36:41.970223 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:42.248035 master-2 kubenswrapper[4776]: E1011 10:36:42.247842 4776 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.ocp.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:36:42.970447 master-2 kubenswrapper[4776]: I1011 10:36:42.970351 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:42.970447 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:42.970447 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:42.970447 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:42.972100 master-2 kubenswrapper[4776]: I1011 10:36:42.970451 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:43.970854 master-2 kubenswrapper[4776]: I1011 10:36:43.970740 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:43.970854 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:43.970854 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:43.970854 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:43.970854 master-2 kubenswrapper[4776]: I1011 10:36:43.970805 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:44.970705 master-2 kubenswrapper[4776]: I1011 10:36:44.970617 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:44.970705 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:44.970705 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:44.970705 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:44.970705 master-2 kubenswrapper[4776]: I1011 10:36:44.970698 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:45.970167 master-2 kubenswrapper[4776]: I1011 10:36:45.970095 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:45.970167 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:45.970167 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:45.970167 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:45.970767 master-2 kubenswrapper[4776]: I1011 10:36:45.970725 4776 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:46.297292 master-2 kubenswrapper[4776]: I1011 10:36:46.297082 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory" Oct 11 10:36:46.297292 master-2 kubenswrapper[4776]: I1011 10:36:46.297141 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure" Oct 11 10:36:46.297292 master-2 kubenswrapper[4776]: I1011 10:36:46.297157 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeHasSufficientPID" Oct 11 10:36:46.297292 master-2 kubenswrapper[4776]: I1011 10:36:46.297180 4776 kubelet_node_status.go:724] "Recording event message for node" node="master-2" event="NodeReady" Oct 11 10:36:46.971034 master-2 kubenswrapper[4776]: I1011 10:36:46.970899 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:46.971034 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:46.971034 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:46.971034 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:46.971470 master-2 kubenswrapper[4776]: I1011 10:36:46.971066 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:47.970386 master-2 kubenswrapper[4776]: I1011 10:36:47.970268 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:47.970386 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:47.970386 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:47.970386 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:47.970386 master-2 kubenswrapper[4776]: I1011 10:36:47.970361 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:48.968997 master-2 kubenswrapper[4776]: I1011 10:36:48.968894 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:48.968997 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:48.968997 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:48.968997 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:48.969433 master-2 kubenswrapper[4776]: I1011 10:36:48.969035 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:49.970041 master-2 kubenswrapper[4776]: I1011 10:36:49.969941 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:49.970041 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:49.970041 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:49.970041 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:49.970989 master-2 kubenswrapper[4776]: I1011 10:36:49.970094 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:50.969714 master-2 kubenswrapper[4776]: I1011 10:36:50.969652 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:50.969714 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:50.969714 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:50.969714 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:50.970085 master-2 kubenswrapper[4776]: I1011 10:36:50.969719 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:51.970181 master-2 kubenswrapper[4776]: I1011 10:36:51.970086 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:51.970181 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:51.970181 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:51.970181 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:51.970787 master-2 kubenswrapper[4776]: I1011 10:36:51.970179 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:52.970370 master-2 kubenswrapper[4776]: I1011 10:36:52.970234 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:52.970370 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:52.970370 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:52.970370 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:52.970370 master-2 kubenswrapper[4776]: I1011 10:36:52.970386 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:53.968977 master-2 kubenswrapper[4776]: I1011 10:36:53.968851 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:53.968977 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:53.968977 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:53.968977 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:53.968977 master-2 kubenswrapper[4776]: I1011 10:36:53.968957 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:54.971323 master-2 kubenswrapper[4776]: I1011 10:36:54.971259 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:54.971323 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:54.971323 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:54.971323 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:54.971323 master-2 kubenswrapper[4776]: I1011 10:36:54.971316 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:55.970622 master-2 kubenswrapper[4776]: I1011 10:36:55.970527 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:55.970622 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:55.970622 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:55.970622 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:55.970931 master-2 kubenswrapper[4776]: I1011 10:36:55.970646 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:56.970500 master-2 kubenswrapper[4776]: I1011 10:36:56.970324 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:56.970500 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:56.970500 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:56.970500 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:56.971823 master-2 kubenswrapper[4776]: I1011 10:36:56.970505 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:57.968698 master-2 kubenswrapper[4776]: I1011 10:36:57.968603 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:57.968698 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:57.968698 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:57.968698 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:57.968698 master-2 kubenswrapper[4776]: I1011 10:36:57.968663 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:58.969116 master-2 kubenswrapper[4776]: I1011 10:36:58.969014 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:58.969116 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:58.969116 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:58.969116 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:58.969116 master-2 kubenswrapper[4776]: I1011 10:36:58.969100 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:36:59.964810 master-2 kubenswrapper[4776]: I1011 10:36:59.964735 4776 status_manager.go:851] "Failed to get status for pod" podUID="1afe0068-3c97-4916-ba53-53f2841a95b0" pod="openshift-marketplace/certified-operators-xtrbk" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods certified-operators-xtrbk)" Oct 11 10:36:59.970453 master-2 kubenswrapper[4776]: I1011 10:36:59.970419 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:36:59.970453 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:36:59.970453 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:36:59.970453 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:36:59.970894 master-2 kubenswrapper[4776]: I1011 10:36:59.970453 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:37:00.969618 master-2 kubenswrapper[4776]: I1011 10:37:00.969584 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:37:00.969618 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:37:00.969618 
master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:37:00.969618 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:37:00.970086 master-2 kubenswrapper[4776]: I1011 10:37:00.970062 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:37:01.969445 master-2 kubenswrapper[4776]: I1011 10:37:01.969372 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:37:01.969445 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:37:01.969445 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:37:01.969445 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:37:01.969445 master-2 kubenswrapper[4776]: I1011 10:37:01.969427 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:37:02.969820 master-2 kubenswrapper[4776]: I1011 10:37:02.969738 4776 patch_prober.go:28] interesting pod/router-default-5ddb89f76-57kcw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 10:37:02.969820 master-2 kubenswrapper[4776]: [-]has-synced failed: reason withheld Oct 11 10:37:02.969820 master-2 kubenswrapper[4776]: [+]process-running ok Oct 11 10:37:02.969820 master-2 kubenswrapper[4776]: healthz check failed Oct 11 10:37:02.970455 master-2 kubenswrapper[4776]: I1011 10:37:02.969824 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:37:02.970455 master-2 kubenswrapper[4776]: I1011 10:37:02.969880 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:37:02.970564 master-2 kubenswrapper[4776]: I1011 10:37:02.970528 4776 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"37bb400b73bf025924c7c9c5bb9e1d981b6e77aaa21f8e234850cbe27200bcf9"} pod="openshift-ingress/router-default-5ddb89f76-57kcw" containerMessage="Container router failed startup probe, will be restarted" Oct 11 10:37:02.970612 master-2 kubenswrapper[4776]: I1011 10:37:02.970575 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5ddb89f76-57kcw" podUID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerName="router" containerID="cri-o://37bb400b73bf025924c7c9c5bb9e1d981b6e77aaa21f8e234850cbe27200bcf9" gracePeriod=3600 Oct 11 10:37:21.705463 master-2 kubenswrapper[4776]: I1011 10:37:21.705363 4776 generic.go:334] "Generic (PLEG): container finished" podID="7652e0ca-2d18-48c7-80e0-f4a936038377" containerID="bee07b3499003457995a526e2769ae6950a3ee1b71df0e623d05c583f95fa09d" exitCode=0 Oct 11 10:37:21.705463 master-2 kubenswrapper[4776]: I1011 10:37:21.705459 
4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" event={"ID":"7652e0ca-2d18-48c7-80e0-f4a936038377","Type":"ContainerDied","Data":"bee07b3499003457995a526e2769ae6950a3ee1b71df0e623d05c583f95fa09d"} Oct 11 10:37:21.706758 master-2 kubenswrapper[4776]: I1011 10:37:21.706453 4776 scope.go:117] "RemoveContainer" containerID="bee07b3499003457995a526e2769ae6950a3ee1b71df0e623d05c583f95fa09d" Oct 11 10:37:22.717174 master-2 kubenswrapper[4776]: I1011 10:37:22.717072 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" event={"ID":"7652e0ca-2d18-48c7-80e0-f4a936038377","Type":"ContainerStarted","Data":"7dae94882449018d204394aae895d50458bd4e4a4aa658882d690763bdb1bc8d"} Oct 11 10:37:22.720201 master-2 kubenswrapper[4776]: I1011 10:37:22.717703 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:37:22.720201 master-2 kubenswrapper[4776]: I1011 10:37:22.719253 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd" Oct 11 10:37:37.534761 master-2 kubenswrapper[4776]: I1011 10:37:37.534156 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Oct 11 10:37:37.561067 master-2 kubenswrapper[4776]: I1011 10:37:37.561001 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Oct 11 10:37:37.563280 master-2 kubenswrapper[4776]: I1011 10:37:37.563249 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Oct 11 10:37:37.566392 master-2 kubenswrapper[4776]: I1011 10:37:37.566358 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Oct 11 10:37:37.570346 master-2 kubenswrapper[4776]: I1011 10:37:37.570320 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Oct 11 10:37:37.571879 master-2 kubenswrapper[4776]: I1011 10:37:37.571852 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Oct 11 10:37:37.579149 master-2 kubenswrapper[4776]: I1011 10:37:37.579099 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Oct 11 10:37:37.579696 master-2 kubenswrapper[4776]: I1011 10:37:37.579639 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 11 10:37:37.580409 master-2 kubenswrapper[4776]: I1011 10:37:37.580380 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Oct 11 10:37:37.584313 master-2 kubenswrapper[4776]: I1011 10:37:37.584280 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Oct 11 10:37:37.586632 master-2 kubenswrapper[4776]: I1011 10:37:37.586592 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Oct 11 10:37:37.587885 master-2 kubenswrapper[4776]: I1011 10:37:37.587853 4776 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Oct 11 10:37:37.616853 master-2 kubenswrapper[4776]: I1011 10:37:37.616782 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"openshift-service-ca.crt" Oct 11 10:37:37.617548 master-2 kubenswrapper[4776]: I1011 10:37:37.617515 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Oct 11 10:37:37.618152 master-2 kubenswrapper[4776]: I1011 10:37:37.618104 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Oct 11 10:37:37.619308 master-2 kubenswrapper[4776]: I1011 10:37:37.619271 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 11 10:37:37.630483 master-2 kubenswrapper[4776]: I1011 10:37:37.630431 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Oct 11 10:37:37.634022 master-2 kubenswrapper[4776]: I1011 10:37:37.633983 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Oct 11 10:37:37.639159 master-2 kubenswrapper[4776]: I1011 10:37:37.639131 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 11 10:37:37.647917 master-2 kubenswrapper[4776]: I1011 10:37:37.647868 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Oct 11 10:37:37.671805 master-2 kubenswrapper[4776]: I1011 10:37:37.671752 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Oct 11 10:37:37.672280 master-2 kubenswrapper[4776]: I1011 10:37:37.672236 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Oct 11 10:37:37.677527 master-2 kubenswrapper[4776]: I1011 10:37:37.677499 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Oct 11 10:37:37.683277 master-2 kubenswrapper[4776]: I1011 10:37:37.683260 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Oct 11 10:37:37.693623 master-2 kubenswrapper[4776]: I1011 10:37:37.693550 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Oct 11 10:37:37.696005 master-2 kubenswrapper[4776]: I1011 10:37:37.695973 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Oct 11 10:37:37.697251 master-2 kubenswrapper[4776]: I1011 10:37:37.697220 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"openshift-service-ca.crt" Oct 11 10:37:37.698422 master-2 kubenswrapper[4776]: I1011 10:37:37.698398 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Oct 11 10:37:37.708513 master-2 kubenswrapper[4776]: I1011 10:37:37.708473 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 11 10:37:37.717350 master-2 kubenswrapper[4776]: I1011 10:37:37.717324 4776 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Oct 11 10:37:37.725757 master-2 kubenswrapper[4776]: I1011 10:37:37.725731 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Oct 11 10:37:37.725968 master-2 kubenswrapper[4776]: I1011 10:37:37.725937 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Oct 11 10:37:37.730857 master-2 kubenswrapper[4776]: I1011 10:37:37.730811 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Oct 11 10:37:37.742176 master-2 kubenswrapper[4776]: I1011 10:37:37.742142 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Oct 11 10:37:37.746275 master-2 kubenswrapper[4776]: I1011 10:37:37.746238 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Oct 11 10:37:37.757727 master-2 kubenswrapper[4776]: I1011 10:37:37.757650 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 11 10:37:37.760386 master-2 kubenswrapper[4776]: I1011 10:37:37.760360 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Oct 11 10:37:37.760873 master-2 kubenswrapper[4776]: I1011 10:37:37.760855 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Oct 11 10:37:37.761144 master-2 kubenswrapper[4776]: I1011 10:37:37.761095 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 11 10:37:37.761645 master-2 kubenswrapper[4776]: I1011 10:37:37.761459 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Oct 11 10:37:37.763528 master-2 kubenswrapper[4776]: I1011 10:37:37.763488 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Oct 11 10:37:37.771450 master-2 kubenswrapper[4776]: I1011 10:37:37.771404 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 11 10:37:37.774350 master-2 kubenswrapper[4776]: I1011 10:37:37.774301 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Oct 11 10:37:37.779463 master-2 kubenswrapper[4776]: I1011 10:37:37.779383 4776 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Oct 11 10:37:37.782260 master-2 kubenswrapper[4776]: I1011 10:37:37.782218 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Oct 11 10:37:37.783202 master-2 kubenswrapper[4776]: I1011 10:37:37.783073 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Oct 11 10:37:37.783275 master-2 kubenswrapper[4776]: I1011 10:37:37.783240 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Oct 11 10:37:37.787906 master-2 kubenswrapper[4776]: I1011 10:37:37.787641 4776 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Oct 11 10:37:37.790917 master-2 kubenswrapper[4776]: I1011 10:37:37.790595 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 11 10:37:37.791094 master-2 kubenswrapper[4776]: I1011 10:37:37.790990 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-68fb97bcc4-r24pr"] Oct 11 10:37:37.792183 master-2 kubenswrapper[4776]: I1011 10:37:37.792134 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.793551 master-2 kubenswrapper[4776]: I1011 10:37:37.793370 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Oct 11 10:37:37.800708 master-2 kubenswrapper[4776]: I1011 10:37:37.799586 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 11 10:37:37.800708 master-2 kubenswrapper[4776]: I1011 10:37:37.799924 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Oct 11 10:37:37.800708 master-2 kubenswrapper[4776]: I1011 10:37:37.800020 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Oct 11 10:37:37.800952 master-2 kubenswrapper[4776]: I1011 10:37:37.800783 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 11 10:37:37.804777 master-2 kubenswrapper[4776]: I1011 10:37:37.801873 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 11 10:37:37.804777 master-2 kubenswrapper[4776]: I1011 10:37:37.802069 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 11 10:37:37.804777 master-2 kubenswrapper[4776]: I1011 10:37:37.803951 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 11 10:37:37.804777 master-2 kubenswrapper[4776]: I1011 10:37:37.804125 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 11 10:37:37.804777 master-2 kubenswrapper[4776]: I1011 10:37:37.804137 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 11 10:37:37.807903 master-2 kubenswrapper[4776]: I1011 10:37:37.807880 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 11 10:37:37.808075 master-2 kubenswrapper[4776]: I1011 10:37:37.808052 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 11 10:37:37.808138 master-2 kubenswrapper[4776]: I1011 10:37:37.807897 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-279hr" Oct 11 10:37:37.808180 master-2 kubenswrapper[4776]: I1011 10:37:37.808086 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Oct 11 10:37:37.808251 master-2 
kubenswrapper[4776]: I1011 10:37:37.808129 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 11 10:37:37.811294 master-2 kubenswrapper[4776]: I1011 10:37:37.811218 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 11 10:37:37.814127 master-2 kubenswrapper[4776]: I1011 10:37:37.814062 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 11 10:37:37.814833 master-2 kubenswrapper[4776]: I1011 10:37:37.814795 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Oct 11 10:37:37.824820 master-2 kubenswrapper[4776]: I1011 10:37:37.824779 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 11 10:37:37.825011 master-2 kubenswrapper[4776]: I1011 10:37:37.824967 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Oct 11 10:37:37.827173 master-2 kubenswrapper[4776]: I1011 10:37:37.827135 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Oct 11 10:37:37.832406 master-2 kubenswrapper[4776]: I1011 10:37:37.832038 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Oct 11 10:37:37.836277 master-2 kubenswrapper[4776]: I1011 10:37:37.836229 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Oct 11 10:37:37.837881 master-2 kubenswrapper[4776]: I1011 10:37:37.837850 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Oct 11 10:37:37.842956 master-2 kubenswrapper[4776]: I1011 10:37:37.842929 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Oct 11 10:37:37.850067 master-2 kubenswrapper[4776]: I1011 10:37:37.849450 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Oct 11 10:37:37.854098 master-2 kubenswrapper[4776]: I1011 10:37:37.852951 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Oct 11 10:37:37.866364 master-2 kubenswrapper[4776]: I1011 10:37:37.866322 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Oct 11 10:37:37.867925 master-2 kubenswrapper[4776]: I1011 10:37:37.867889 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Oct 11 10:37:37.869166 master-2 kubenswrapper[4776]: I1011 10:37:37.869102 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr"] Oct 11 10:37:37.870178 master-2 kubenswrapper[4776]: I1011 10:37:37.870149 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr" Oct 11 10:37:37.872225 master-2 kubenswrapper[4776]: I1011 10:37:37.872169 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Oct 11 10:37:37.872653 master-2 kubenswrapper[4776]: I1011 10:37:37.872625 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Oct 11 10:37:37.872653 master-2 kubenswrapper[4776]: I1011 10:37:37.872631 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 11 10:37:37.872856 master-2 kubenswrapper[4776]: I1011 10:37:37.872704 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Oct 11 10:37:37.872856 master-2 kubenswrapper[4776]: I1011 10:37:37.872821 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-65wvh" Oct 11 10:37:37.873511 master-2 kubenswrapper[4776]: I1011 10:37:37.873454 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Oct 11 10:37:37.883520 master-2 kubenswrapper[4776]: I1011 10:37:37.883475 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr"] Oct 11 10:37:37.884091 master-2 kubenswrapper[4776]: I1011 10:37:37.884059 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 11 10:37:37.886855 master-2 kubenswrapper[4776]: I1011 10:37:37.886821 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 11 10:37:37.887696 master-2 kubenswrapper[4776]: I1011 10:37:37.887315 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 11 10:37:37.887989 master-2 kubenswrapper[4776]: I1011 10:37:37.887962 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Oct 11 10:37:37.895156 master-2 kubenswrapper[4776]: I1011 10:37:37.895106 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Oct 11 10:37:37.901004 master-2 kubenswrapper[4776]: I1011 10:37:37.900974 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.906129 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.906500 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.907490 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfkq8\" (UniqueName: 
\"kubernetes.io/projected/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-kube-api-access-jfkq8\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.907554 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.907718 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-session\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.907747 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.907830 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.907980 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-service-ca\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.908030 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/d38d167f-f15f-4f7e-8717-46dc61374f4a-monitoring-plugin-cert\") pod \"monitoring-plugin-578f8b47b8-5qgnr\" (UID: \"d38d167f-f15f-4f7e-8717-46dc61374f4a\") " pod="openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.908208 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-error\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " 
pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.908347 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-dir\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.908382 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.908499 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-router-certs\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.908528 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-login\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.908759 master-2 kubenswrapper[4776]: I1011 10:37:37.908697 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-policies\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:37.911216 master-2 kubenswrapper[4776]: I1011 10:37:37.911160 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Oct 11 10:37:37.914513 master-2 kubenswrapper[4776]: I1011 10:37:37.914476 4776 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Oct 11 10:37:37.915517 master-2 kubenswrapper[4776]: I1011 10:37:37.915477 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 11 10:37:37.922791 master-2 kubenswrapper[4776]: I1011 10:37:37.922753 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Oct 11 10:37:37.924028 master-2 kubenswrapper[4776]: I1011 10:37:37.923972 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 11 10:37:37.932663 master-2 kubenswrapper[4776]: I1011 10:37:37.932618 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Oct 11 10:37:37.933909 
master-2 kubenswrapper[4776]: I1011 10:37:37.933882 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Oct 11 10:37:37.943459 master-2 kubenswrapper[4776]: I1011 10:37:37.943425 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Oct 11 10:37:37.943644 master-2 kubenswrapper[4776]: I1011 10:37:37.943611 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Oct 11 10:37:37.943874 master-2 kubenswrapper[4776]: I1011 10:37:37.943778 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt" Oct 11 10:37:37.943874 master-2 kubenswrapper[4776]: I1011 10:37:37.943785 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Oct 11 10:37:37.944005 master-2 kubenswrapper[4776]: I1011 10:37:37.943920 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Oct 11 10:37:37.944049 master-2 kubenswrapper[4776]: I1011 10:37:37.944020 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 11 10:37:37.950692 master-2 kubenswrapper[4776]: I1011 10:37:37.950619 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Oct 11 10:37:37.954647 master-2 kubenswrapper[4776]: I1011 10:37:37.954612 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Oct 11 10:37:37.955415 master-2 kubenswrapper[4776]: I1011 10:37:37.955384 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Oct 11 10:37:37.959997 master-2 kubenswrapper[4776]: I1011 10:37:37.959956 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Oct 11 10:37:37.961943 master-2 kubenswrapper[4776]: I1011 10:37:37.961915 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Oct 11 10:37:37.971875 master-2 kubenswrapper[4776]: I1011 10:37:37.971857 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Oct 11 10:37:37.974704 master-2 kubenswrapper[4776]: I1011 10:37:37.974688 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Oct 11 10:37:37.974838 master-2 kubenswrapper[4776]: I1011 10:37:37.974728 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 11 10:37:37.978929 master-2 kubenswrapper[4776]: I1011 10:37:37.978912 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 11 10:37:37.980694 master-2 kubenswrapper[4776]: I1011 10:37:37.980632 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Oct 11 10:37:37.989041 master-2 kubenswrapper[4776]: I1011 10:37:37.988995 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Oct 11 10:37:37.989763 master-2 
kubenswrapper[4776]: I1011 10:37:37.989542 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-vwjkz" Oct 11 10:37:37.991153 master-2 kubenswrapper[4776]: I1011 10:37:37.991135 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 11 10:37:37.993008 master-2 kubenswrapper[4776]: I1011 10:37:37.992995 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Oct 11 10:37:38.002524 master-2 kubenswrapper[4776]: I1011 10:37:38.002451 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Oct 11 10:37:38.010527 master-2 kubenswrapper[4776]: I1011 10:37:38.010469 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.010597 master-2 kubenswrapper[4776]: I1011 10:37:38.010540 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-router-certs\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.010597 master-2 kubenswrapper[4776]: I1011 10:37:38.010575 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-login\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.010663 master-2 kubenswrapper[4776]: I1011 10:37:38.010607 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-policies\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.010663 master-2 kubenswrapper[4776]: I1011 10:37:38.010637 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.010663 master-2 kubenswrapper[4776]: I1011 10:37:38.010662 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfkq8\" (UniqueName: \"kubernetes.io/projected/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-kube-api-access-jfkq8\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.010785 master-2 kubenswrapper[4776]: 
I1011 10:37:38.010706 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.010785 master-2 kubenswrapper[4776]: I1011 10:37:38.010733 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-session\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.010785 master-2 kubenswrapper[4776]: I1011 10:37:38.010759 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.010785 master-2 kubenswrapper[4776]: I1011 10:37:38.010782 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.010894 master-2 kubenswrapper[4776]: I1011 10:37:38.010801 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-service-ca\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.010894 master-2 kubenswrapper[4776]: I1011 10:37:38.010842 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/d38d167f-f15f-4f7e-8717-46dc61374f4a-monitoring-plugin-cert\") pod \"monitoring-plugin-578f8b47b8-5qgnr\" (UID: \"d38d167f-f15f-4f7e-8717-46dc61374f4a\") " pod="openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr" Oct 11 10:37:38.010894 master-2 kubenswrapper[4776]: I1011 10:37:38.010871 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-error\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.010984 master-2 kubenswrapper[4776]: I1011 10:37:38.010895 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-dir\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 
10:37:38.011047 master-2 kubenswrapper[4776]: I1011 10:37:38.011009 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-dir\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.013104 master-2 kubenswrapper[4776]: I1011 10:37:38.013051 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-policies\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.013104 master-2 kubenswrapper[4776]: I1011 10:37:38.013097 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.013794 master-2 kubenswrapper[4776]: I1011 10:37:38.013760 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-service-ca\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.015260 master-2 kubenswrapper[4776]: I1011 10:37:38.014371 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.015260 master-2 kubenswrapper[4776]: I1011 10:37:38.015156 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.015260 master-2 kubenswrapper[4776]: I1011 10:37:38.015201 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.015698 master-2 kubenswrapper[4776]: I1011 10:37:38.015649 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-login\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.015855 
master-2 kubenswrapper[4776]: I1011 10:37:38.015818 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.016525 master-2 kubenswrapper[4776]: I1011 10:37:38.016488 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/d38d167f-f15f-4f7e-8717-46dc61374f4a-monitoring-plugin-cert\") pod \"monitoring-plugin-578f8b47b8-5qgnr\" (UID: \"d38d167f-f15f-4f7e-8717-46dc61374f4a\") " pod="openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr" Oct 11 10:37:38.020579 master-2 kubenswrapper[4776]: I1011 10:37:38.020538 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-error\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.020721 master-2 kubenswrapper[4776]: I1011 10:37:38.020625 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-session\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.024614 master-2 kubenswrapper[4776]: I1011 10:37:38.024564 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Oct 11 10:37:38.026494 master-2 kubenswrapper[4776]: I1011 10:37:38.026447 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-router-certs\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.044926 master-2 kubenswrapper[4776]: I1011 10:37:38.042426 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Oct 11 10:37:38.082557 master-2 kubenswrapper[4776]: I1011 10:37:38.082488 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Oct 11 10:37:38.089287 master-2 kubenswrapper[4776]: I1011 10:37:38.089249 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfkq8\" (UniqueName: \"kubernetes.io/projected/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-kube-api-access-jfkq8\") pod \"oauth-openshift-68fb97bcc4-r24pr\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.101691 master-2 kubenswrapper[4776]: I1011 10:37:38.101646 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 11 10:37:38.120870 master-2 kubenswrapper[4776]: I1011 10:37:38.120792 4776 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:38.121646 master-2 kubenswrapper[4776]: I1011 10:37:38.121612 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Oct 11 10:37:38.142373 master-2 kubenswrapper[4776]: I1011 10:37:38.142225 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Oct 11 10:37:38.164958 master-2 kubenswrapper[4776]: I1011 10:37:38.164895 4776 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Oct 11 10:37:38.182464 master-2 kubenswrapper[4776]: I1011 10:37:38.182421 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Oct 11 10:37:38.199121 master-2 kubenswrapper[4776]: I1011 10:37:38.198959 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr" Oct 11 10:37:38.200957 master-2 kubenswrapper[4776]: I1011 10:37:38.200931 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Oct 11 10:37:38.236704 master-2 kubenswrapper[4776]: I1011 10:37:38.228915 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Oct 11 10:37:38.250699 master-2 kubenswrapper[4776]: I1011 10:37:38.244034 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Oct 11 10:37:38.270128 master-2 kubenswrapper[4776]: I1011 10:37:38.270078 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Oct 11 10:37:38.290696 master-2 kubenswrapper[4776]: I1011 10:37:38.283375 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Oct 11 10:37:38.324775 master-2 kubenswrapper[4776]: I1011 10:37:38.318907 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Oct 11 10:37:38.340712 master-2 kubenswrapper[4776]: I1011 10:37:38.333284 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Oct 11 10:37:38.353419 master-2 kubenswrapper[4776]: I1011 10:37:38.346425 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Oct 11 10:37:38.368570 master-2 kubenswrapper[4776]: I1011 10:37:38.367001 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Oct 11 10:37:38.383876 master-2 kubenswrapper[4776]: I1011 10:37:38.383309 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Oct 11 10:37:38.410377 master-2 kubenswrapper[4776]: I1011 10:37:38.410326 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Oct 11 10:37:38.424609 master-2 kubenswrapper[4776]: I1011 10:37:38.423103 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Oct 11 10:37:38.441949 master-2 kubenswrapper[4776]: 
I1011 10:37:38.441866 4776 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Oct 11 10:37:38.462143 master-2 kubenswrapper[4776]: I1011 10:37:38.462108 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Oct 11 10:37:38.496690 master-2 kubenswrapper[4776]: I1011 10:37:38.488950 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Oct 11 10:37:38.515690 master-2 kubenswrapper[4776]: I1011 10:37:38.504053 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Oct 11 10:37:38.532985 master-2 kubenswrapper[4776]: I1011 10:37:38.530207 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Oct 11 10:37:38.545248 master-2 kubenswrapper[4776]: I1011 10:37:38.545201 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Oct 11 10:37:38.562973 master-2 kubenswrapper[4776]: I1011 10:37:38.562939 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Oct 11 10:37:38.582344 master-2 kubenswrapper[4776]: I1011 10:37:38.582230 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Oct 11 10:37:38.602957 master-2 kubenswrapper[4776]: I1011 10:37:38.602374 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Oct 11 10:37:38.624075 master-2 kubenswrapper[4776]: I1011 10:37:38.623845 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Oct 11 10:37:38.624525 master-2 kubenswrapper[4776]: I1011 10:37:38.624482 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr"] Oct 11 10:37:38.633957 master-2 kubenswrapper[4776]: I1011 10:37:38.633932 4776 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 10:37:38.644999 master-2 kubenswrapper[4776]: I1011 10:37:38.644953 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Oct 11 10:37:38.661854 master-2 kubenswrapper[4776]: I1011 10:37:38.661819 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Oct 11 10:37:38.682854 master-2 kubenswrapper[4776]: I1011 10:37:38.682794 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-ap7ej74ueigk4" Oct 11 10:37:38.703934 master-2 kubenswrapper[4776]: I1011 10:37:38.703844 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Oct 11 10:37:38.704571 master-2 kubenswrapper[4776]: I1011 10:37:38.704535 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-68fb97bcc4-r24pr"] Oct 11 10:37:38.712293 master-2 kubenswrapper[4776]: W1011 10:37:38.712244 4776 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd9ad6e0_e85a_41fb_a5cf_a8abeb46f369.slice/crio-fb05ca004bb431ae259dec9c7bc562a3772d43d4b0ba3d1a323b0aee4334c90e WatchSource:0}: Error finding container fb05ca004bb431ae259dec9c7bc562a3772d43d4b0ba3d1a323b0aee4334c90e: Status 404 returned error can't find the container with id fb05ca004bb431ae259dec9c7bc562a3772d43d4b0ba3d1a323b0aee4334c90e Oct 11 10:37:38.721602 master-2 kubenswrapper[4776]: I1011 10:37:38.721551 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Oct 11 10:37:38.741714 master-2 kubenswrapper[4776]: I1011 10:37:38.741655 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 11 10:37:38.761925 master-2 kubenswrapper[4776]: I1011 10:37:38.761870 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Oct 11 10:37:38.789268 master-2 kubenswrapper[4776]: I1011 10:37:38.789183 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Oct 11 10:37:38.802742 master-2 kubenswrapper[4776]: I1011 10:37:38.802647 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Oct 11 10:37:38.822010 master-2 kubenswrapper[4776]: I1011 10:37:38.821969 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Oct 11 10:37:38.828498 master-2 kubenswrapper[4776]: E1011 10:37:38.828462 4776 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-ap7ej74ueigk4: secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:37:38.828766 master-2 kubenswrapper[4776]: E1011 10:37:38.828752 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle podName:5473628e-94c8-4706-bb03-ff4836debe5f nodeName:}" failed. No retries permitted until 2025-10-11 10:39:40.828731834 +0000 UTC m=+815.613158543 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle") pod "metrics-server-65d86dff78-crzgp" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f") : secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:37:38.844669 master-2 kubenswrapper[4776]: I1011 10:37:38.844511 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"openshift-service-ca.crt" Oct 11 10:37:38.862667 master-2 kubenswrapper[4776]: I1011 10:37:38.862608 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 11 10:37:38.881942 master-2 kubenswrapper[4776]: I1011 10:37:38.881896 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Oct 11 10:37:38.902122 master-2 kubenswrapper[4776]: I1011 10:37:38.902087 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Oct 11 10:37:38.921995 master-2 kubenswrapper[4776]: I1011 10:37:38.921955 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Oct 11 10:37:38.937019 master-2 kubenswrapper[4776]: I1011 10:37:38.936987 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr" event={"ID":"d38d167f-f15f-4f7e-8717-46dc61374f4a","Type":"ContainerStarted","Data":"699d77bb6cc6ab9230123cac0b247370d49a03113205a2f77e3698ef1dd861e4"} Oct 11 10:37:38.938471 master-2 kubenswrapper[4776]: I1011 10:37:38.938450 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" event={"ID":"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369","Type":"ContainerStarted","Data":"fb05ca004bb431ae259dec9c7bc562a3772d43d4b0ba3d1a323b0aee4334c90e"} Oct 11 10:37:38.941397 master-2 kubenswrapper[4776]: I1011 10:37:38.941375 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Oct 11 10:37:38.961335 master-2 kubenswrapper[4776]: I1011 10:37:38.961269 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 11 10:37:38.981504 master-2 kubenswrapper[4776]: I1011 10:37:38.981474 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Oct 11 10:37:39.001947 master-2 kubenswrapper[4776]: I1011 10:37:39.001918 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Oct 11 10:37:39.022557 master-2 kubenswrapper[4776]: I1011 10:37:39.022519 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 11 10:37:39.042826 master-2 kubenswrapper[4776]: I1011 10:37:39.042790 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Oct 11 10:37:39.062869 master-2 kubenswrapper[4776]: I1011 10:37:39.062812 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Oct 11 10:37:39.082078 master-2 kubenswrapper[4776]: I1011 10:37:39.082024 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Oct 11 
10:37:39.101919 master-2 kubenswrapper[4776]: I1011 10:37:39.101632 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Oct 11 10:37:39.128284 master-2 kubenswrapper[4776]: I1011 10:37:39.128239 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Oct 11 10:37:39.142220 master-2 kubenswrapper[4776]: I1011 10:37:39.142145 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Oct 11 10:37:39.161913 master-2 kubenswrapper[4776]: I1011 10:37:39.161606 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Oct 11 10:37:39.183760 master-2 kubenswrapper[4776]: I1011 10:37:39.181808 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 11 10:37:39.201981 master-2 kubenswrapper[4776]: I1011 10:37:39.201921 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Oct 11 10:37:39.222923 master-2 kubenswrapper[4776]: I1011 10:37:39.222865 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 11 10:37:39.240030 master-2 kubenswrapper[4776]: I1011 10:37:39.239957 4776 request.go:700] Waited for 1.010841169s due to client-side throttling, not priority and fairness, request: GET:https://api-int.ocp.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=15510 Oct 11 10:37:39.242686 master-2 kubenswrapper[4776]: I1011 10:37:39.242623 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Oct 11 10:37:39.262040 master-2 kubenswrapper[4776]: I1011 10:37:39.261993 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Oct 11 10:37:39.282985 master-2 kubenswrapper[4776]: I1011 10:37:39.282956 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Oct 11 10:37:39.302036 master-2 kubenswrapper[4776]: I1011 10:37:39.301955 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Oct 11 10:37:39.330035 master-2 kubenswrapper[4776]: I1011 10:37:39.329998 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Oct 11 10:37:39.342295 master-2 kubenswrapper[4776]: I1011 10:37:39.342250 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Oct 11 10:37:39.362875 master-2 kubenswrapper[4776]: I1011 10:37:39.361938 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 11 10:37:39.381893 master-2 kubenswrapper[4776]: I1011 10:37:39.381842 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Oct 11 10:37:39.401043 master-2 kubenswrapper[4776]: I1011 10:37:39.401007 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 11 10:37:39.421772 master-2 kubenswrapper[4776]: I1011 10:37:39.421719 4776 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Oct 11 10:37:39.442208 master-2 kubenswrapper[4776]: I1011 10:37:39.441979 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Oct 11 10:37:39.461664 master-2 kubenswrapper[4776]: I1011 10:37:39.461629 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 11 10:37:39.487151 master-2 kubenswrapper[4776]: I1011 10:37:39.487093 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 11 10:37:39.501838 master-2 kubenswrapper[4776]: I1011 10:37:39.501771 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-5v5km" Oct 11 10:37:39.521400 master-2 kubenswrapper[4776]: I1011 10:37:39.521359 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Oct 11 10:37:39.542997 master-2 kubenswrapper[4776]: I1011 10:37:39.542811 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Oct 11 10:37:39.562238 master-2 kubenswrapper[4776]: I1011 10:37:39.562108 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Oct 11 10:37:39.582455 master-2 kubenswrapper[4776]: I1011 10:37:39.582299 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Oct 11 10:37:39.602389 master-2 kubenswrapper[4776]: I1011 10:37:39.602329 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 11 10:37:39.623083 master-2 kubenswrapper[4776]: I1011 10:37:39.622862 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Oct 11 10:37:39.642600 master-2 kubenswrapper[4776]: I1011 10:37:39.642533 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Oct 11 10:37:39.673201 master-2 kubenswrapper[4776]: I1011 10:37:39.661761 4776 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Oct 11 10:37:39.687199 master-2 kubenswrapper[4776]: I1011 10:37:39.686947 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Oct 11 10:37:39.702340 master-2 kubenswrapper[4776]: I1011 10:37:39.702136 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Oct 11 10:37:39.722322 master-2 kubenswrapper[4776]: I1011 10:37:39.722206 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 11 10:37:39.742743 master-2 kubenswrapper[4776]: I1011 10:37:39.742696 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 11 10:37:39.761689 master-2 kubenswrapper[4776]: I1011 10:37:39.761631 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 11 10:37:39.782171 master-2 kubenswrapper[4776]: I1011 10:37:39.782132 4776 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 11 10:37:39.803214 master-2 kubenswrapper[4776]: I1011 10:37:39.803138 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Oct 11 10:37:39.822826 master-2 kubenswrapper[4776]: I1011 10:37:39.822779 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Oct 11 10:37:39.841774 master-2 kubenswrapper[4776]: I1011 10:37:39.841703 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Oct 11 10:37:39.861869 master-2 kubenswrapper[4776]: I1011 10:37:39.861797 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Oct 11 10:37:39.882926 master-2 kubenswrapper[4776]: I1011 10:37:39.882813 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Oct 11 10:37:39.902884 master-2 kubenswrapper[4776]: I1011 10:37:39.902844 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 11 10:37:39.921919 master-2 kubenswrapper[4776]: I1011 10:37:39.921648 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 11 10:37:39.945991 master-2 kubenswrapper[4776]: I1011 10:37:39.942016 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Oct 11 10:37:39.962529 master-2 kubenswrapper[4776]: I1011 10:37:39.962364 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Oct 11 10:37:39.982723 master-2 kubenswrapper[4776]: I1011 10:37:39.981707 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt" Oct 11 10:37:40.002567 master-2 kubenswrapper[4776]: I1011 10:37:40.002488 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Oct 11 10:37:40.028528 master-2 kubenswrapper[4776]: I1011 10:37:40.028482 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Oct 11 10:37:40.041975 master-2 kubenswrapper[4776]: I1011 10:37:40.041928 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 11 10:37:40.061615 master-2 kubenswrapper[4776]: I1011 10:37:40.061535 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 11 10:37:40.082357 master-2 kubenswrapper[4776]: I1011 10:37:40.082268 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 11 10:37:40.101357 master-2 kubenswrapper[4776]: I1011 10:37:40.101321 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Oct 11 10:37:40.122374 master-2 kubenswrapper[4776]: I1011 10:37:40.122328 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Oct 11 10:37:40.146278 master-2 kubenswrapper[4776]: I1011 10:37:40.146245 4776 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 11 10:37:40.161594 master-2 kubenswrapper[4776]: I1011 10:37:40.161546 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 11 10:37:40.181451 master-2 kubenswrapper[4776]: I1011 10:37:40.181341 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Oct 11 10:37:40.202324 master-2 kubenswrapper[4776]: I1011 10:37:40.202267 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Oct 11 10:37:40.222599 master-2 kubenswrapper[4776]: I1011 10:37:40.222540 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"openshift-service-ca.crt" Oct 11 10:37:40.241798 master-2 kubenswrapper[4776]: I1011 10:37:40.241733 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Oct 11 10:37:40.443863 master-2 kubenswrapper[4776]: I1011 10:37:40.443767 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77c7855cb4-l7mc2"] Oct 11 10:37:40.444251 master-2 kubenswrapper[4776]: I1011 10:37:40.444152 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" podUID="01d9e8ba-5d12-4d58-8db6-dbbea31c4df1" containerName="controller-manager" containerID="cri-o://15dc2b1cfde423b619736343b47ca9dd39eca021477b309bd20f1ac3429f0eac" gracePeriod=30 Oct 11 10:37:40.486796 master-2 kubenswrapper[4776]: I1011 10:37:40.486744 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5"] Oct 11 10:37:40.487027 master-2 kubenswrapper[4776]: I1011 10:37:40.486954 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" podUID="97bde30f-16ad-44f5-ac26-9f0ba5ba74f5" containerName="route-controller-manager" containerID="cri-o://9278bc0b71056762d1acbc3fed930331878f846bd5deefeb8a2b904499d18eb2" gracePeriod=30 Oct 11 10:37:40.963643 master-2 kubenswrapper[4776]: I1011 10:37:40.963585 4776 generic.go:334] "Generic (PLEG): container finished" podID="01d9e8ba-5d12-4d58-8db6-dbbea31c4df1" containerID="15dc2b1cfde423b619736343b47ca9dd39eca021477b309bd20f1ac3429f0eac" exitCode=0 Oct 11 10:37:40.964265 master-2 kubenswrapper[4776]: I1011 10:37:40.963691 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" event={"ID":"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1","Type":"ContainerDied","Data":"15dc2b1cfde423b619736343b47ca9dd39eca021477b309bd20f1ac3429f0eac"} Oct 11 10:37:40.968165 master-2 kubenswrapper[4776]: I1011 10:37:40.967906 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr" event={"ID":"d38d167f-f15f-4f7e-8717-46dc61374f4a","Type":"ContainerStarted","Data":"db161c13a5aef0f296c492266dc2b16da13ac806243a968407a23c107700ab11"} Oct 11 10:37:40.968727 master-2 kubenswrapper[4776]: I1011 10:37:40.968430 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr" Oct 11 10:37:40.972044 master-2 kubenswrapper[4776]: I1011 
10:37:40.971983 4776 generic.go:334] "Generic (PLEG): container finished" podID="97bde30f-16ad-44f5-ac26-9f0ba5ba74f5" containerID="9278bc0b71056762d1acbc3fed930331878f846bd5deefeb8a2b904499d18eb2" exitCode=0 Oct 11 10:37:40.972113 master-2 kubenswrapper[4776]: I1011 10:37:40.972041 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" event={"ID":"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5","Type":"ContainerDied","Data":"9278bc0b71056762d1acbc3fed930331878f846bd5deefeb8a2b904499d18eb2"} Oct 11 10:37:40.977406 master-2 kubenswrapper[4776]: I1011 10:37:40.977366 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr" Oct 11 10:37:40.994940 master-2 kubenswrapper[4776]: I1011 10:37:40.989433 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr" podStartSLOduration=2.573646337 podStartE2EDuration="3.989391978s" podCreationTimestamp="2025-10-11 10:37:37 +0000 UTC" firstStartedPulling="2025-10-11 10:37:38.633863365 +0000 UTC m=+693.418290074" lastFinishedPulling="2025-10-11 10:37:40.049609006 +0000 UTC m=+694.834035715" observedRunningTime="2025-10-11 10:37:40.988517085 +0000 UTC m=+695.772943814" watchObservedRunningTime="2025-10-11 10:37:40.989391978 +0000 UTC m=+695.773818707" Oct 11 10:37:41.207156 master-2 kubenswrapper[4776]: I1011 10:37:41.207104 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:37:41.214045 master-2 kubenswrapper[4776]: I1011 10:37:41.213997 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:37:41.273304 master-2 kubenswrapper[4776]: I1011 10:37:41.273241 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-client-ca\") pod \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " Oct 11 10:37:41.273540 master-2 kubenswrapper[4776]: I1011 10:37:41.273341 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-client-ca\") pod \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " Oct 11 10:37:41.273540 master-2 kubenswrapper[4776]: I1011 10:37:41.273368 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-serving-cert\") pod \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " Oct 11 10:37:41.273540 master-2 kubenswrapper[4776]: I1011 10:37:41.273411 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkssx\" (UniqueName: \"kubernetes.io/projected/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-kube-api-access-wkssx\") pod \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " Oct 11 10:37:41.273540 master-2 kubenswrapper[4776]: I1011 10:37:41.273445 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-config\") pod \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " Oct 11 10:37:41.273540 master-2 kubenswrapper[4776]: I1011 10:37:41.273503 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-serving-cert\") pod \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " Oct 11 10:37:41.273828 master-2 kubenswrapper[4776]: I1011 10:37:41.273542 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-proxy-ca-bundles\") pod \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " Oct 11 10:37:41.273828 master-2 kubenswrapper[4776]: I1011 10:37:41.273576 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-config\") pod \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\" (UID: \"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5\") " Oct 11 10:37:41.273828 master-2 kubenswrapper[4776]: I1011 10:37:41.273603 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mmxs\" (UniqueName: \"kubernetes.io/projected/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-kube-api-access-8mmxs\") pod \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\" (UID: \"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1\") " Oct 11 10:37:41.276564 master-2 kubenswrapper[4776]: I1011 10:37:41.276528 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-kube-api-access-8mmxs" (OuterVolumeSpecName: "kube-api-access-8mmxs") pod "01d9e8ba-5d12-4d58-8db6-dbbea31c4df1" (UID: "01d9e8ba-5d12-4d58-8db6-dbbea31c4df1"). InnerVolumeSpecName "kube-api-access-8mmxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:37:41.276938 master-2 kubenswrapper[4776]: I1011 10:37:41.276908 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-client-ca" (OuterVolumeSpecName: "client-ca") pod "01d9e8ba-5d12-4d58-8db6-dbbea31c4df1" (UID: "01d9e8ba-5d12-4d58-8db6-dbbea31c4df1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:37:41.277273 master-2 kubenswrapper[4776]: I1011 10:37:41.277248 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-client-ca" (OuterVolumeSpecName: "client-ca") pod "97bde30f-16ad-44f5-ac26-9f0ba5ba74f5" (UID: "97bde30f-16ad-44f5-ac26-9f0ba5ba74f5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:37:41.278256 master-2 kubenswrapper[4776]: I1011 10:37:41.278233 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "01d9e8ba-5d12-4d58-8db6-dbbea31c4df1" (UID: "01d9e8ba-5d12-4d58-8db6-dbbea31c4df1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:37:41.279194 master-2 kubenswrapper[4776]: I1011 10:37:41.279165 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "97bde30f-16ad-44f5-ac26-9f0ba5ba74f5" (UID: "97bde30f-16ad-44f5-ac26-9f0ba5ba74f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:37:41.279664 master-2 kubenswrapper[4776]: I1011 10:37:41.279595 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-config" (OuterVolumeSpecName: "config") pod "01d9e8ba-5d12-4d58-8db6-dbbea31c4df1" (UID: "01d9e8ba-5d12-4d58-8db6-dbbea31c4df1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:37:41.280891 master-2 kubenswrapper[4776]: I1011 10:37:41.280870 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-kube-api-access-wkssx" (OuterVolumeSpecName: "kube-api-access-wkssx") pod "97bde30f-16ad-44f5-ac26-9f0ba5ba74f5" (UID: "97bde30f-16ad-44f5-ac26-9f0ba5ba74f5"). InnerVolumeSpecName "kube-api-access-wkssx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:37:41.286177 master-2 kubenswrapper[4776]: I1011 10:37:41.286099 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01d9e8ba-5d12-4d58-8db6-dbbea31c4df1" (UID: "01d9e8ba-5d12-4d58-8db6-dbbea31c4df1"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:37:41.287278 master-2 kubenswrapper[4776]: I1011 10:37:41.287230 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-config" (OuterVolumeSpecName: "config") pod "97bde30f-16ad-44f5-ac26-9f0ba5ba74f5" (UID: "97bde30f-16ad-44f5-ac26-9f0ba5ba74f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:37:41.375247 master-2 kubenswrapper[4776]: I1011 10:37:41.375115 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mmxs\" (UniqueName: \"kubernetes.io/projected/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-kube-api-access-8mmxs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:37:41.375247 master-2 kubenswrapper[4776]: I1011 10:37:41.375152 4776 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-client-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:37:41.375247 master-2 kubenswrapper[4776]: I1011 10:37:41.375162 4776 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-client-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:37:41.375247 master-2 kubenswrapper[4776]: I1011 10:37:41.375170 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:37:41.375247 master-2 kubenswrapper[4776]: I1011 10:37:41.375179 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkssx\" (UniqueName: \"kubernetes.io/projected/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-kube-api-access-wkssx\") on node \"master-2\" DevicePath \"\"" Oct 11 10:37:41.375247 master-2 kubenswrapper[4776]: I1011 10:37:41.375188 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:37:41.375247 master-2 kubenswrapper[4776]: I1011 10:37:41.375196 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:37:41.375247 master-2 kubenswrapper[4776]: I1011 10:37:41.375204 4776 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1-proxy-ca-bundles\") on node \"master-2\" DevicePath \"\"" Oct 11 10:37:41.375247 master-2 kubenswrapper[4776]: I1011 10:37:41.375212 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:37:41.664785 master-2 kubenswrapper[4776]: I1011 10:37:41.664718 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-897b595f-pt2b4"] Oct 11 10:37:41.664991 master-2 kubenswrapper[4776]: E1011 10:37:41.664962 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01d9e8ba-5d12-4d58-8db6-dbbea31c4df1" containerName="controller-manager" Oct 11 10:37:41.664991 master-2 kubenswrapper[4776]: I1011 10:37:41.664974 4776 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="01d9e8ba-5d12-4d58-8db6-dbbea31c4df1" containerName="controller-manager" Oct 11 10:37:41.664991 master-2 kubenswrapper[4776]: E1011 10:37:41.664991 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97bde30f-16ad-44f5-ac26-9f0ba5ba74f5" containerName="route-controller-manager" Oct 11 10:37:41.664991 master-2 kubenswrapper[4776]: I1011 10:37:41.665018 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="97bde30f-16ad-44f5-ac26-9f0ba5ba74f5" containerName="route-controller-manager" Oct 11 10:37:41.665154 master-2 kubenswrapper[4776]: I1011 10:37:41.665113 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="01d9e8ba-5d12-4d58-8db6-dbbea31c4df1" containerName="controller-manager" Oct 11 10:37:41.665154 master-2 kubenswrapper[4776]: I1011 10:37:41.665132 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="97bde30f-16ad-44f5-ac26-9f0ba5ba74f5" containerName="route-controller-manager" Oct 11 10:37:41.665647 master-2 kubenswrapper[4776]: I1011 10:37:41.665601 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.667455 master-2 kubenswrapper[4776]: I1011 10:37:41.667407 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2"] Oct 11 10:37:41.667953 master-2 kubenswrapper[4776]: I1011 10:37:41.667907 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-z46lz" Oct 11 10:37:41.668170 master-2 kubenswrapper[4776]: I1011 10:37:41.668137 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.679867 master-2 kubenswrapper[4776]: I1011 10:37:41.679832 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6baec280-854f-4a9d-b459-e6ccb1e67c12-config\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.679988 master-2 kubenswrapper[4776]: I1011 10:37:41.679875 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6baec280-854f-4a9d-b459-e6ccb1e67c12-client-ca\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.679988 master-2 kubenswrapper[4776]: I1011 10:37:41.679906 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6baec280-854f-4a9d-b459-e6ccb1e67c12-serving-cert\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.679988 master-2 kubenswrapper[4776]: I1011 10:37:41.679938 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8cj9\" (UniqueName: \"kubernetes.io/projected/6baec280-854f-4a9d-b459-e6ccb1e67c12-kube-api-access-q8cj9\") pod \"controller-manager-897b595f-pt2b4\" (UID: 
\"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.680200 master-2 kubenswrapper[4776]: I1011 10:37:41.680009 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ecc5770-3970-42d0-9773-d8be6fbf04a2-config\") pod \"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.680200 master-2 kubenswrapper[4776]: I1011 10:37:41.680106 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6baec280-854f-4a9d-b459-e6ccb1e67c12-proxy-ca-bundles\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.680200 master-2 kubenswrapper[4776]: I1011 10:37:41.680147 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk4m7\" (UniqueName: \"kubernetes.io/projected/1ecc5770-3970-42d0-9773-d8be6fbf04a2-kube-api-access-lk4m7\") pod \"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.680334 master-2 kubenswrapper[4776]: I1011 10:37:41.680265 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ecc5770-3970-42d0-9773-d8be6fbf04a2-serving-cert\") pod \"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.680334 master-2 kubenswrapper[4776]: I1011 10:37:41.680312 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ecc5770-3970-42d0-9773-d8be6fbf04a2-client-ca\") pod \"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.682294 master-2 kubenswrapper[4776]: I1011 10:37:41.682261 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-897b595f-pt2b4"] Oct 11 10:37:41.687240 master-2 kubenswrapper[4776]: I1011 10:37:41.687191 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2"] Oct 11 10:37:41.781614 master-2 kubenswrapper[4776]: I1011 10:37:41.781546 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8cj9\" (UniqueName: \"kubernetes.io/projected/6baec280-854f-4a9d-b459-e6ccb1e67c12-kube-api-access-q8cj9\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.781614 master-2 kubenswrapper[4776]: I1011 10:37:41.781614 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ecc5770-3970-42d0-9773-d8be6fbf04a2-config\") pod 
\"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.781872 master-2 kubenswrapper[4776]: I1011 10:37:41.781642 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6baec280-854f-4a9d-b459-e6ccb1e67c12-proxy-ca-bundles\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.781872 master-2 kubenswrapper[4776]: I1011 10:37:41.781659 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk4m7\" (UniqueName: \"kubernetes.io/projected/1ecc5770-3970-42d0-9773-d8be6fbf04a2-kube-api-access-lk4m7\") pod \"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.781872 master-2 kubenswrapper[4776]: I1011 10:37:41.781705 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ecc5770-3970-42d0-9773-d8be6fbf04a2-serving-cert\") pod \"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.781872 master-2 kubenswrapper[4776]: I1011 10:37:41.781737 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ecc5770-3970-42d0-9773-d8be6fbf04a2-client-ca\") pod \"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.781872 master-2 kubenswrapper[4776]: I1011 10:37:41.781775 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6baec280-854f-4a9d-b459-e6ccb1e67c12-client-ca\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.781872 master-2 kubenswrapper[4776]: I1011 10:37:41.781794 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6baec280-854f-4a9d-b459-e6ccb1e67c12-config\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.781872 master-2 kubenswrapper[4776]: I1011 10:37:41.781816 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6baec280-854f-4a9d-b459-e6ccb1e67c12-serving-cert\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.782777 master-2 kubenswrapper[4776]: I1011 10:37:41.782746 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6baec280-854f-4a9d-b459-e6ccb1e67c12-client-ca\") pod \"controller-manager-897b595f-pt2b4\" (UID: 
\"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.783032 master-2 kubenswrapper[4776]: I1011 10:37:41.782979 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ecc5770-3970-42d0-9773-d8be6fbf04a2-config\") pod \"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.783103 master-2 kubenswrapper[4776]: I1011 10:37:41.782984 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ecc5770-3970-42d0-9773-d8be6fbf04a2-client-ca\") pod \"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.783354 master-2 kubenswrapper[4776]: I1011 10:37:41.783317 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6baec280-854f-4a9d-b459-e6ccb1e67c12-proxy-ca-bundles\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.785742 master-2 kubenswrapper[4776]: I1011 10:37:41.785710 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ecc5770-3970-42d0-9773-d8be6fbf04a2-serving-cert\") pod \"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.785854 master-2 kubenswrapper[4776]: I1011 10:37:41.785821 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6baec280-854f-4a9d-b459-e6ccb1e67c12-serving-cert\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.786536 master-2 kubenswrapper[4776]: I1011 10:37:41.786505 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6baec280-854f-4a9d-b459-e6ccb1e67c12-config\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.800387 master-2 kubenswrapper[4776]: I1011 10:37:41.800347 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk4m7\" (UniqueName: \"kubernetes.io/projected/1ecc5770-3970-42d0-9773-d8be6fbf04a2-kube-api-access-lk4m7\") pod \"route-controller-manager-57c8488cd7-d5ck2\" (UID: \"1ecc5770-3970-42d0-9773-d8be6fbf04a2\") " pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.802773 master-2 kubenswrapper[4776]: I1011 10:37:41.802666 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8cj9\" (UniqueName: \"kubernetes.io/projected/6baec280-854f-4a9d-b459-e6ccb1e67c12-kube-api-access-q8cj9\") pod \"controller-manager-897b595f-pt2b4\" (UID: \"6baec280-854f-4a9d-b459-e6ccb1e67c12\") " pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" 
Oct 11 10:37:41.981088 master-2 kubenswrapper[4776]: I1011 10:37:41.980957 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" event={"ID":"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369","Type":"ContainerStarted","Data":"4c53ab08cf4b5166d95c57913daeef16e08566e476b981ef95245c117bb87d6a"} Oct 11 10:37:41.981695 master-2 kubenswrapper[4776]: I1011 10:37:41.981311 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:41.981862 master-2 kubenswrapper[4776]: I1011 10:37:41.981811 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:41.985384 master-2 kubenswrapper[4776]: I1011 10:37:41.985334 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" event={"ID":"01d9e8ba-5d12-4d58-8db6-dbbea31c4df1","Type":"ContainerDied","Data":"bc433b79c55289ef4a64be58481abb43a7688c174b82b2fdfa0d85577d07edb1"} Oct 11 10:37:41.985384 master-2 kubenswrapper[4776]: I1011 10:37:41.985366 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77c7855cb4-l7mc2" Oct 11 10:37:41.985522 master-2 kubenswrapper[4776]: I1011 10:37:41.985404 4776 scope.go:117] "RemoveContainer" containerID="15dc2b1cfde423b619736343b47ca9dd39eca021477b309bd20f1ac3429f0eac" Oct 11 10:37:41.988723 master-2 kubenswrapper[4776]: I1011 10:37:41.988683 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:41.991202 master-2 kubenswrapper[4776]: I1011 10:37:41.991113 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" event={"ID":"97bde30f-16ad-44f5-ac26-9f0ba5ba74f5","Type":"ContainerDied","Data":"417196648a53013bc23124ecf6d1bf221fe3949e2ac324623e298e42a8c1ca2b"} Oct 11 10:37:41.991202 master-2 kubenswrapper[4776]: I1011 10:37:41.991163 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5" Oct 11 10:37:42.002772 master-2 kubenswrapper[4776]: I1011 10:37:42.002696 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:37:42.015893 master-2 kubenswrapper[4776]: I1011 10:37:42.015830 4776 scope.go:117] "RemoveContainer" containerID="9278bc0b71056762d1acbc3fed930331878f846bd5deefeb8a2b904499d18eb2" Oct 11 10:37:42.042588 master-2 kubenswrapper[4776]: I1011 10:37:42.042483 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" podStartSLOduration=7.871011364 podStartE2EDuration="10.042450424s" podCreationTimestamp="2025-10-11 10:37:32 +0000 UTC" firstStartedPulling="2025-10-11 10:37:38.714669179 +0000 UTC m=+693.499095888" lastFinishedPulling="2025-10-11 10:37:40.886108249 +0000 UTC m=+695.670534948" observedRunningTime="2025-10-11 10:37:42.013696706 +0000 UTC m=+696.798123435" watchObservedRunningTime="2025-10-11 10:37:42.042450424 +0000 UTC m=+696.826877143" Oct 11 10:37:42.043556 master-2 kubenswrapper[4776]: I1011 10:37:42.043463 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5"] Oct 11 10:37:42.100141 master-2 kubenswrapper[4776]: I1011 10:37:42.100070 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5"] Oct 11 10:37:42.107874 master-2 kubenswrapper[4776]: I1011 10:37:42.106295 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77c7855cb4-l7mc2"] Oct 11 10:37:42.113426 master-2 kubenswrapper[4776]: I1011 10:37:42.113388 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77c7855cb4-l7mc2"] Oct 11 10:37:42.462885 master-2 kubenswrapper[4776]: I1011 10:37:42.462816 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-897b595f-pt2b4"] Oct 11 10:37:42.688139 master-2 kubenswrapper[4776]: I1011 10:37:42.688090 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2"] Oct 11 10:37:42.690867 master-2 kubenswrapper[4776]: W1011 10:37:42.690826 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ecc5770_3970_42d0_9773_d8be6fbf04a2.slice/crio-ae5e0557bdd17ed8ab94916dd413b8aab5aae6d00f7649f0d7bf75739d671dd2 WatchSource:0}: Error finding container ae5e0557bdd17ed8ab94916dd413b8aab5aae6d00f7649f0d7bf75739d671dd2: Status 404 returned error can't find the container with id ae5e0557bdd17ed8ab94916dd413b8aab5aae6d00f7649f0d7bf75739d671dd2 Oct 11 10:37:42.999096 master-2 kubenswrapper[4776]: I1011 10:37:42.998975 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" event={"ID":"1ecc5770-3970-42d0-9773-d8be6fbf04a2","Type":"ContainerStarted","Data":"1b7ce82538690cb89e2d7e9f0d406a630bf93e0be90c1ad442461141eb831682"} Oct 11 10:37:42.999096 master-2 kubenswrapper[4776]: I1011 10:37:42.999031 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" 
event={"ID":"1ecc5770-3970-42d0-9773-d8be6fbf04a2","Type":"ContainerStarted","Data":"ae5e0557bdd17ed8ab94916dd413b8aab5aae6d00f7649f0d7bf75739d671dd2"} Oct 11 10:37:42.999626 master-2 kubenswrapper[4776]: I1011 10:37:42.999492 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:43.002208 master-2 kubenswrapper[4776]: I1011 10:37:43.002154 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" event={"ID":"6baec280-854f-4a9d-b459-e6ccb1e67c12","Type":"ContainerStarted","Data":"eab4810a3e0e55ee6510b9546f4aaa044d83c3d3f1504fdc228b1fa68c5f7ca8"} Oct 11 10:37:43.002292 master-2 kubenswrapper[4776]: I1011 10:37:43.002209 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" event={"ID":"6baec280-854f-4a9d-b459-e6ccb1e67c12","Type":"ContainerStarted","Data":"26793ae6f40b7ef7a71be09ae2079c1aa7d005227d8bda4b4bd0254701c1775d"} Oct 11 10:37:43.002399 master-2 kubenswrapper[4776]: I1011 10:37:43.002355 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:43.016171 master-2 kubenswrapper[4776]: I1011 10:37:43.016129 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" Oct 11 10:37:43.036952 master-2 kubenswrapper[4776]: I1011 10:37:43.036869 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" podStartSLOduration=3.036849632 podStartE2EDuration="3.036849632s" podCreationTimestamp="2025-10-11 10:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:37:43.03274506 +0000 UTC m=+697.817171779" watchObservedRunningTime="2025-10-11 10:37:43.036849632 +0000 UTC m=+697.821276341" Oct 11 10:37:43.073507 master-2 kubenswrapper[4776]: I1011 10:37:43.073339 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-897b595f-pt2b4" podStartSLOduration=3.073311801 podStartE2EDuration="3.073311801s" podCreationTimestamp="2025-10-11 10:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:37:43.072358115 +0000 UTC m=+697.856784824" watchObservedRunningTime="2025-10-11 10:37:43.073311801 +0000 UTC m=+697.857738510" Oct 11 10:37:43.234927 master-2 kubenswrapper[4776]: I1011 10:37:43.234862 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2" Oct 11 10:37:43.567209 master-2 kubenswrapper[4776]: I1011 10:37:43.567152 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-65bb9777fc-bkmsm"] Oct 11 10:37:43.567864 master-2 kubenswrapper[4776]: I1011 10:37:43.567834 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65bb9777fc-bkmsm" Oct 11 10:37:43.573042 master-2 kubenswrapper[4776]: I1011 10:37:43.572997 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-pqtx8" Oct 11 10:37:43.573127 master-2 kubenswrapper[4776]: I1011 10:37:43.573015 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Oct 11 10:37:43.573838 master-2 kubenswrapper[4776]: I1011 10:37:43.573796 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Oct 11 10:37:43.618699 master-2 kubenswrapper[4776]: I1011 10:37:43.618604 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-572jp\" (UniqueName: \"kubernetes.io/projected/31d64616-a514-4ae3-bb6d-d6eb14d9147a-kube-api-access-572jp\") pod \"downloads-65bb9777fc-bkmsm\" (UID: \"31d64616-a514-4ae3-bb6d-d6eb14d9147a\") " pod="openshift-console/downloads-65bb9777fc-bkmsm" Oct 11 10:37:43.695781 master-2 kubenswrapper[4776]: I1011 10:37:43.695718 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-65bb9777fc-bkmsm"] Oct 11 10:37:43.735209 master-2 kubenswrapper[4776]: I1011 10:37:43.735154 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-572jp\" (UniqueName: \"kubernetes.io/projected/31d64616-a514-4ae3-bb6d-d6eb14d9147a-kube-api-access-572jp\") pod \"downloads-65bb9777fc-bkmsm\" (UID: \"31d64616-a514-4ae3-bb6d-d6eb14d9147a\") " pod="openshift-console/downloads-65bb9777fc-bkmsm" Oct 11 10:37:43.806874 master-2 kubenswrapper[4776]: I1011 10:37:43.806767 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-572jp\" (UniqueName: \"kubernetes.io/projected/31d64616-a514-4ae3-bb6d-d6eb14d9147a-kube-api-access-572jp\") pod \"downloads-65bb9777fc-bkmsm\" (UID: \"31d64616-a514-4ae3-bb6d-d6eb14d9147a\") " pod="openshift-console/downloads-65bb9777fc-bkmsm" Oct 11 10:37:43.838853 master-2 kubenswrapper[4776]: I1011 10:37:43.838736 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-7845cf54d8-h5nlf"] Oct 11 10:37:43.839066 master-2 kubenswrapper[4776]: I1011 10:37:43.839025 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" containerID="cri-o://a1a9c7629f1fd873d3ec9d24b009ce28e04c1ae342e924bb98a2d69fd1fdcc5f" gracePeriod=120 Oct 11 10:37:43.839434 master-2 kubenswrapper[4776]: I1011 10:37:43.839401 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver-check-endpoints" containerID="cri-o://33b8451dee3f8d5ed8e144b04e3c4757d199f647e9b246655c277be3cef812a5" gracePeriod=120 Oct 11 10:37:43.890709 master-2 kubenswrapper[4776]: I1011 10:37:43.887213 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65bb9777fc-bkmsm" Oct 11 10:37:44.024833 master-2 kubenswrapper[4776]: I1011 10:37:44.024769 4776 generic.go:334] "Generic (PLEG): container finished" podID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerID="33b8451dee3f8d5ed8e144b04e3c4757d199f647e9b246655c277be3cef812a5" exitCode=0 Oct 11 10:37:44.025507 master-2 kubenswrapper[4776]: I1011 10:37:44.024901 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" event={"ID":"8c500140-fe5c-4fa2-914b-bb1e0c5758ab","Type":"ContainerDied","Data":"33b8451dee3f8d5ed8e144b04e3c4757d199f647e9b246655c277be3cef812a5"} Oct 11 10:37:44.068541 master-2 kubenswrapper[4776]: I1011 10:37:44.068473 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01d9e8ba-5d12-4d58-8db6-dbbea31c4df1" path="/var/lib/kubelet/pods/01d9e8ba-5d12-4d58-8db6-dbbea31c4df1/volumes" Oct 11 10:37:44.069036 master-2 kubenswrapper[4776]: I1011 10:37:44.069000 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97bde30f-16ad-44f5-ac26-9f0ba5ba74f5" path="/var/lib/kubelet/pods/97bde30f-16ad-44f5-ac26-9f0ba5ba74f5/volumes" Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: I1011 10:37:44.200113 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:37:44.202271 master-2 kubenswrapper[4776]: I1011 10:37:44.200202 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:37:44.387696 master-2 kubenswrapper[4776]: I1011 10:37:44.387252 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-65bb9777fc-bkmsm"] Oct 11 10:37:44.398432 master-2 kubenswrapper[4776]: W1011 10:37:44.398348 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31d64616_a514_4ae3_bb6d_d6eb14d9147a.slice/crio-724b736396538af81d05db3a2d73778b58761c2035bea9a2c55b66d74150f1f4 WatchSource:0}: Error finding container 724b736396538af81d05db3a2d73778b58761c2035bea9a2c55b66d74150f1f4: Status 404 returned error can't find the container with id 724b736396538af81d05db3a2d73778b58761c2035bea9a2c55b66d74150f1f4 Oct 11 10:37:45.030205 master-2 kubenswrapper[4776]: I1011 10:37:45.030018 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65bb9777fc-bkmsm" event={"ID":"31d64616-a514-4ae3-bb6d-d6eb14d9147a","Type":"ContainerStarted","Data":"724b736396538af81d05db3a2d73778b58761c2035bea9a2c55b66d74150f1f4"} Oct 11 10:37:45.031744 master-2 kubenswrapper[4776]: I1011 10:37:45.031702 4776 generic.go:334] "Generic (PLEG): container finished" podID="e540333c-4b4d-439e-a82a-cd3a97c95a43" containerID="89704c12769118c53c22d7f82d393e22678a4835f23d73f837dd13b143b58cd8" exitCode=0 Oct 11 10:37:45.031844 master-2 kubenswrapper[4776]: I1011 10:37:45.031776 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" event={"ID":"e540333c-4b4d-439e-a82a-cd3a97c95a43","Type":"ContainerDied","Data":"89704c12769118c53c22d7f82d393e22678a4835f23d73f837dd13b143b58cd8"} Oct 11 10:37:45.031844 master-2 kubenswrapper[4776]: I1011 10:37:45.031809 4776 scope.go:117] "RemoveContainer" containerID="0ba5d510196688f6b97d8a36964cc97a744fb54a3c5e03a38ad0b42712671103" Oct 11 10:37:45.032361 master-2 kubenswrapper[4776]: I1011 10:37:45.032303 4776 scope.go:117] "RemoveContainer" containerID="89704c12769118c53c22d7f82d393e22678a4835f23d73f837dd13b143b58cd8" Oct 11 10:37:45.032578 master-2 kubenswrapper[4776]: E1011 10:37:45.032482 4776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-storage-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-storage-operator pod=cluster-storage-operator-56d4b95494-9fbb2_openshift-cluster-storage-operator(e540333c-4b4d-439e-a82a-cd3a97c95a43)\"" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" podUID="e540333c-4b4d-439e-a82a-cd3a97c95a43" Oct 11 10:37:46.860970 master-2 kubenswrapper[4776]: I1011 10:37:46.860915 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-57bccbfdf6-2s9dn"] Oct 11 10:37:46.861798 master-2 kubenswrapper[4776]: I1011 10:37:46.861751 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:46.864411 master-2 kubenswrapper[4776]: I1011 10:37:46.864368 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-8wmjp" Oct 11 10:37:46.864560 master-2 kubenswrapper[4776]: I1011 10:37:46.864532 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 11 10:37:46.864775 master-2 kubenswrapper[4776]: I1011 10:37:46.864686 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 11 10:37:46.864926 master-2 kubenswrapper[4776]: I1011 10:37:46.864822 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 11 10:37:46.865311 master-2 kubenswrapper[4776]: I1011 10:37:46.865280 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 11 10:37:46.865443 master-2 kubenswrapper[4776]: I1011 10:37:46.865417 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 11 10:37:46.886261 master-2 kubenswrapper[4776]: I1011 10:37:46.886214 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57bccbfdf6-2s9dn"] Oct 11 10:37:46.923799 master-2 kubenswrapper[4776]: I1011 10:37:46.923728 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-oauth-serving-cert\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:46.924078 master-2 kubenswrapper[4776]: I1011 10:37:46.924058 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-service-ca\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:46.924411 master-2 kubenswrapper[4776]: I1011 10:37:46.924328 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-oauth-config\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:46.924478 master-2 kubenswrapper[4776]: I1011 10:37:46.924452 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rz6c\" (UniqueName: \"kubernetes.io/projected/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-kube-api-access-6rz6c\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:46.924703 master-2 kubenswrapper[4776]: I1011 10:37:46.924657 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-config\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:46.925004 master-2 kubenswrapper[4776]: I1011 
10:37:46.924926 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-serving-cert\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.028867 master-2 kubenswrapper[4776]: I1011 10:37:47.028815 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-config\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.028867 master-2 kubenswrapper[4776]: I1011 10:37:47.028864 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-serving-cert\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.029144 master-2 kubenswrapper[4776]: I1011 10:37:47.028900 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-oauth-serving-cert\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.029144 master-2 kubenswrapper[4776]: I1011 10:37:47.028926 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-service-ca\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.029144 master-2 kubenswrapper[4776]: I1011 10:37:47.028957 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-oauth-config\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.029144 master-2 kubenswrapper[4776]: I1011 10:37:47.028971 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rz6c\" (UniqueName: \"kubernetes.io/projected/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-kube-api-access-6rz6c\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.030272 master-2 kubenswrapper[4776]: I1011 10:37:47.030253 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-oauth-serving-cert\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.030487 master-2 kubenswrapper[4776]: I1011 10:37:47.030387 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-config\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " 
pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.030687 master-2 kubenswrapper[4776]: I1011 10:37:47.030622 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-service-ca\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.033884 master-2 kubenswrapper[4776]: I1011 10:37:47.033847 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-oauth-config\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.034382 master-2 kubenswrapper[4776]: I1011 10:37:47.034351 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-serving-cert\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.077629 master-2 kubenswrapper[4776]: I1011 10:37:47.077522 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rz6c\" (UniqueName: \"kubernetes.io/projected/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-kube-api-access-6rz6c\") pod \"console-57bccbfdf6-2s9dn\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.182722 master-2 kubenswrapper[4776]: I1011 10:37:47.181235 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:47.643774 master-2 kubenswrapper[4776]: I1011 10:37:47.643728 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57bccbfdf6-2s9dn"] Oct 11 10:37:47.644690 master-2 kubenswrapper[4776]: W1011 10:37:47.644637 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae5dca9f_dad3_4712_86f9_3a3e537b5c99.slice/crio-96abc13c3c5fa7d6b33609ba42f139af90fc31f27879c88e7021022b86b662c8 WatchSource:0}: Error finding container 96abc13c3c5fa7d6b33609ba42f139af90fc31f27879c88e7021022b86b662c8: Status 404 returned error can't find the container with id 96abc13c3c5fa7d6b33609ba42f139af90fc31f27879c88e7021022b86b662c8 Oct 11 10:37:47.845716 master-2 kubenswrapper[4776]: I1011 10:37:47.841887 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"] Oct 11 10:37:47.845716 master-2 kubenswrapper[4776]: I1011 10:37:47.842093 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" containerID="cri-o://6dd550d507d66e801941ec8d8dccd203204326eb4fa9e98d9d9de574d26fd168" gracePeriod=120 Oct 11 10:37:48.072201 master-2 kubenswrapper[4776]: I1011 10:37:48.072127 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57bccbfdf6-2s9dn" event={"ID":"ae5dca9f-dad3-4712-86f9-3a3e537b5c99","Type":"ContainerStarted","Data":"96abc13c3c5fa7d6b33609ba42f139af90fc31f27879c88e7021022b86b662c8"} Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: I1011 
10:37:48.459510 4776 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-tv729 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:37:48.459570 master-2 kubenswrapper[4776]: I1011 10:37:48.459565 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: I1011 10:37:49.196150 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: 
[+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:37:49.196224 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:37:49.197503 master-2 kubenswrapper[4776]: I1011 10:37:49.196229 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:37:50.097381 master-2 kubenswrapper[4776]: I1011 10:37:50.097257 4776 generic.go:334] "Generic (PLEG): container finished" podID="c8cd90ff-e70c-4837-82c4-0fec67a8a51b" containerID="37bb400b73bf025924c7c9c5bb9e1d981b6e77aaa21f8e234850cbe27200bcf9" exitCode=0 Oct 11 10:37:50.097868 master-2 kubenswrapper[4776]: I1011 10:37:50.097709 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-57kcw" event={"ID":"c8cd90ff-e70c-4837-82c4-0fec67a8a51b","Type":"ContainerDied","Data":"37bb400b73bf025924c7c9c5bb9e1d981b6e77aaa21f8e234850cbe27200bcf9"} Oct 11 10:37:50.097868 master-2 kubenswrapper[4776]: I1011 10:37:50.097749 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-57kcw" event={"ID":"c8cd90ff-e70c-4837-82c4-0fec67a8a51b","Type":"ContainerStarted","Data":"eeeb754b3aa286aa5e74205d303f35958d66321450cc7b407c8db19c823fb525"} Oct 11 10:37:50.097868 master-2 kubenswrapper[4776]: I1011 10:37:50.097769 4776 scope.go:117] "RemoveContainer" containerID="532aa417a81b450c3ee53605926d217d88d13c85c6a1d9e5ea21cc1e35ca9346" Oct 11 10:37:50.967141 master-2 kubenswrapper[4776]: I1011 10:37:50.967068 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:37:50.971385 master-2 kubenswrapper[4776]: I1011 10:37:50.971345 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:37:51.107013 master-2 kubenswrapper[4776]: I1011 10:37:51.106959 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:37:51.114718 master-2 kubenswrapper[4776]: I1011 10:37:51.114614 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5ddb89f76-57kcw" Oct 11 10:37:53.141486 master-2 kubenswrapper[4776]: I1011 10:37:53.141396 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57bccbfdf6-2s9dn" event={"ID":"ae5dca9f-dad3-4712-86f9-3a3e537b5c99","Type":"ContainerStarted","Data":"60848da2e2d10f9d66feb8c08460e0558bf900e390d4618db9b15ae08a0b1a6f"} Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: I1011 10:37:53.459501 4776 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-tv729 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: 
[+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:37:53.459545 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:37:53.460119 master-2 kubenswrapper[4776]: I1011 10:37:53.459558 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:37:53.625584 master-2 kubenswrapper[4776]: I1011 10:37:53.625504 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-57bccbfdf6-2s9dn" podStartSLOduration=2.3514108289999998 podStartE2EDuration="7.625488076s" podCreationTimestamp="2025-10-11 10:37:46 +0000 UTC" firstStartedPulling="2025-10-11 10:37:47.64685942 +0000 UTC m=+702.431286139" lastFinishedPulling="2025-10-11 10:37:52.920936677 +0000 UTC m=+707.705363386" observedRunningTime="2025-10-11 10:37:53.51064451 +0000 UTC m=+708.295071219" watchObservedRunningTime="2025-10-11 10:37:53.625488076 +0000 UTC m=+708.409914785" Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: I1011 10:37:54.197903 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:37:54.197959 master-2 
kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:37:54.197959 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:37:54.200460 master-2 kubenswrapper[4776]: I1011 10:37:54.198879 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:37:54.200460 master-2 kubenswrapper[4776]: I1011 10:37:54.199211 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:37:57.181809 master-2 kubenswrapper[4776]: I1011 10:37:57.181720 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:57.183704 master-2 kubenswrapper[4776]: I1011 10:37:57.183588 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:37:57.185152 master-2 kubenswrapper[4776]: I1011 10:37:57.185112 4776 patch_prober.go:28] interesting pod/console-57bccbfdf6-2s9dn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.76:8443/health\": dial tcp 10.128.0.76:8443: connect: connection refused" start-of-body= Oct 11 10:37:57.185238 master-2 kubenswrapper[4776]: I1011 10:37:57.185168 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-57bccbfdf6-2s9dn" podUID="ae5dca9f-dad3-4712-86f9-3a3e537b5c99" containerName="console" probeResult="failure" output="Get \"https://10.128.0.76:8443/health\": dial tcp 10.128.0.76:8443: connect: connection refused" Oct 11 10:37:58.062331 master-2 kubenswrapper[4776]: I1011 10:37:58.062271 4776 scope.go:117] "RemoveContainer" containerID="89704c12769118c53c22d7f82d393e22678a4835f23d73f837dd13b143b58cd8" Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: I1011 10:37:58.459404 4776 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-tv729 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:37:58.459460 master-2 
kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:37:58.459460 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:37:58.461096 master-2 kubenswrapper[4776]: I1011 10:37:58.459462 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:37:58.461096 master-2 kubenswrapper[4776]: I1011 10:37:58.459933 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:37:59.176997 master-2 kubenswrapper[4776]: I1011 10:37:59.176936 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2" event={"ID":"e540333c-4b4d-439e-a82a-cd3a97c95a43","Type":"ContainerStarted","Data":"2b9f0d27b6f21bda8ce6285e683e7a0f5ef61713f522d8ef354c5ed8789a85fa"} Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: I1011 10:37:59.195755 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:37:59.195785 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:37:59.196349 master-2 kubenswrapper[4776]: I1011 10:37:59.195805 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Oct 11 10:38:00.909132 master-2 kubenswrapper[4776]: I1011 10:38:00.908950 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj"] Oct 11 10:38:00.910340 master-2 kubenswrapper[4776]: I1011 10:38:00.910308 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj" Oct 11 10:38:00.912637 master-2 kubenswrapper[4776]: I1011 10:38:00.912588 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Oct 11 10:38:00.912793 master-2 kubenswrapper[4776]: I1011 10:38:00.912764 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Oct 11 10:38:00.959977 master-2 kubenswrapper[4776]: I1011 10:38:00.959925 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj"] Oct 11 10:38:00.969429 master-2 kubenswrapper[4776]: I1011 10:38:00.969383 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/db93c795-02b8-4e94-9fdc-bdc616f05e56-nginx-conf\") pod \"networking-console-plugin-85df6bdd68-qsxrj\" (UID: \"db93c795-02b8-4e94-9fdc-bdc616f05e56\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj" Oct 11 10:38:00.969587 master-2 kubenswrapper[4776]: I1011 10:38:00.969431 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/db93c795-02b8-4e94-9fdc-bdc616f05e56-networking-console-plugin-cert\") pod \"networking-console-plugin-85df6bdd68-qsxrj\" (UID: \"db93c795-02b8-4e94-9fdc-bdc616f05e56\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj" Oct 11 10:38:01.070218 master-2 kubenswrapper[4776]: I1011 10:38:01.070161 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/db93c795-02b8-4e94-9fdc-bdc616f05e56-nginx-conf\") pod \"networking-console-plugin-85df6bdd68-qsxrj\" (UID: \"db93c795-02b8-4e94-9fdc-bdc616f05e56\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj" Oct 11 10:38:01.070218 master-2 kubenswrapper[4776]: I1011 10:38:01.070211 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/db93c795-02b8-4e94-9fdc-bdc616f05e56-networking-console-plugin-cert\") pod \"networking-console-plugin-85df6bdd68-qsxrj\" (UID: \"db93c795-02b8-4e94-9fdc-bdc616f05e56\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj" Oct 11 10:38:01.070477 master-2 kubenswrapper[4776]: E1011 10:38:01.070325 4776 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Oct 11 10:38:01.070477 master-2 kubenswrapper[4776]: E1011 10:38:01.070400 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db93c795-02b8-4e94-9fdc-bdc616f05e56-networking-console-plugin-cert podName:db93c795-02b8-4e94-9fdc-bdc616f05e56 nodeName:}" failed. No retries permitted until 2025-10-11 10:38:01.570381177 +0000 UTC m=+716.354807896 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/db93c795-02b8-4e94-9fdc-bdc616f05e56-networking-console-plugin-cert") pod "networking-console-plugin-85df6bdd68-qsxrj" (UID: "db93c795-02b8-4e94-9fdc-bdc616f05e56") : secret "networking-console-plugin-cert" not found Oct 11 10:38:01.071364 master-2 kubenswrapper[4776]: I1011 10:38:01.071336 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/db93c795-02b8-4e94-9fdc-bdc616f05e56-nginx-conf\") pod \"networking-console-plugin-85df6bdd68-qsxrj\" (UID: \"db93c795-02b8-4e94-9fdc-bdc616f05e56\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj" Oct 11 10:38:01.580306 master-2 kubenswrapper[4776]: I1011 10:38:01.580231 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/db93c795-02b8-4e94-9fdc-bdc616f05e56-networking-console-plugin-cert\") pod \"networking-console-plugin-85df6bdd68-qsxrj\" (UID: \"db93c795-02b8-4e94-9fdc-bdc616f05e56\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj" Oct 11 10:38:01.584290 master-2 kubenswrapper[4776]: I1011 10:38:01.584220 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/db93c795-02b8-4e94-9fdc-bdc616f05e56-networking-console-plugin-cert\") pod \"networking-console-plugin-85df6bdd68-qsxrj\" (UID: \"db93c795-02b8-4e94-9fdc-bdc616f05e56\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj" Oct 11 10:38:01.824487 master-2 kubenswrapper[4776]: I1011 10:38:01.824401 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj" Oct 11 10:38:02.290172 master-2 kubenswrapper[4776]: I1011 10:38:02.289985 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj"] Oct 11 10:38:02.293415 master-2 kubenswrapper[4776]: W1011 10:38:02.293384 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb93c795_02b8_4e94_9fdc_bdc616f05e56.slice/crio-e83ef9478383c0e735705ea9054c055d55170e99077c9340f1d40de1cbe08bc7 WatchSource:0}: Error finding container e83ef9478383c0e735705ea9054c055d55170e99077c9340f1d40de1cbe08bc7: Status 404 returned error can't find the container with id e83ef9478383c0e735705ea9054c055d55170e99077c9340f1d40de1cbe08bc7 Oct 11 10:38:03.211997 master-2 kubenswrapper[4776]: I1011 10:38:03.211942 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj" event={"ID":"db93c795-02b8-4e94-9fdc-bdc616f05e56","Type":"ContainerStarted","Data":"e83ef9478383c0e735705ea9054c055d55170e99077c9340f1d40de1cbe08bc7"} Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: I1011 10:38:03.460465 4776 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-tv729 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:03.460561 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:03.461411 master-2 kubenswrapper[4776]: I1011 10:38:03.460596 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: I1011 10:38:04.196454 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:04.196511 master-2 
kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:04.196511 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:04.197302 master-2 kubenswrapper[4776]: I1011 10:38:04.196517 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:04.219303 master-2 kubenswrapper[4776]: I1011 10:38:04.219253 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj" event={"ID":"db93c795-02b8-4e94-9fdc-bdc616f05e56","Type":"ContainerStarted","Data":"1e4f9b093cb4a2472154a1759b750613d1a4988c25914dfef5cf3d4ab591df57"} Oct 11 10:38:04.240739 master-2 kubenswrapper[4776]: I1011 10:38:04.240288 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj" podStartSLOduration=2.816745739 podStartE2EDuration="4.240267788s" podCreationTimestamp="2025-10-11 10:38:00 +0000 UTC" firstStartedPulling="2025-10-11 10:38:02.296456121 +0000 UTC m=+717.080882820" lastFinishedPulling="2025-10-11 10:38:03.71997816 +0000 UTC m=+718.504404869" observedRunningTime="2025-10-11 10:38:04.238037617 +0000 UTC m=+719.022464326" watchObservedRunningTime="2025-10-11 10:38:04.240267788 +0000 UTC m=+719.024694497" Oct 11 10:38:07.182166 master-2 kubenswrapper[4776]: I1011 10:38:07.181772 4776 patch_prober.go:28] interesting pod/console-57bccbfdf6-2s9dn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.76:8443/health\": dial tcp 10.128.0.76:8443: connect: connection refused" start-of-body= Oct 11 10:38:07.182166 master-2 kubenswrapper[4776]: I1011 10:38:07.181825 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-57bccbfdf6-2s9dn" podUID="ae5dca9f-dad3-4712-86f9-3a3e537b5c99" containerName="console" 
probeResult="failure" output="Get \"https://10.128.0.76:8443/health\": dial tcp 10.128.0.76:8443: connect: connection refused" Oct 11 10:38:07.922167 master-2 kubenswrapper[4776]: I1011 10:38:07.921774 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57bccbfdf6-2s9dn"] Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: I1011 10:38:08.458347 4776 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-tv729 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:08.458416 master-2 kubenswrapper[4776]: I1011 10:38:08.458411 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: I1011 10:38:09.196545 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:38:09.196611 
master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:09.196611 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:09.197421 master-2 kubenswrapper[4776]: I1011 10:38:09.196610 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:11.347938 master-2 kubenswrapper[4776]: I1011 10:38:11.347866 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-6-master-2"] Oct 11 10:38:11.348947 master-2 kubenswrapper[4776]: I1011 10:38:11.348906 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:11.352014 master-2 kubenswrapper[4776]: I1011 10:38:11.351556 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-js756" Oct 11 10:38:11.359567 master-2 kubenswrapper[4776]: I1011 10:38:11.359525 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-2"] Oct 11 10:38:11.535141 master-2 kubenswrapper[4776]: I1011 10:38:11.535041 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-kubelet-dir\") pod \"installer-6-master-2\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") " pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:11.535141 master-2 kubenswrapper[4776]: I1011 10:38:11.535137 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8755f64d-7ff8-4df3-ae55-c1154ba02830-kube-api-access\") pod \"installer-6-master-2\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") " pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:11.535431 master-2 kubenswrapper[4776]: I1011 10:38:11.535186 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-var-lock\") pod \"installer-6-master-2\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") " pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:11.636528 master-2 kubenswrapper[4776]: I1011 10:38:11.636412 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8755f64d-7ff8-4df3-ae55-c1154ba02830-kube-api-access\") pod \"installer-6-master-2\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") " pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:11.636528 master-2 kubenswrapper[4776]: I1011 10:38:11.636466 4776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-var-lock\") pod \"installer-6-master-2\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") " pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:11.636528 master-2 kubenswrapper[4776]: I1011 10:38:11.636518 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-kubelet-dir\") pod \"installer-6-master-2\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") " pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:11.636846 master-2 kubenswrapper[4776]: I1011 10:38:11.636587 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-kubelet-dir\") pod \"installer-6-master-2\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") " pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:11.636846 master-2 kubenswrapper[4776]: I1011 10:38:11.636686 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-var-lock\") pod \"installer-6-master-2\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") " pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:11.673251 master-2 kubenswrapper[4776]: I1011 10:38:11.673179 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8755f64d-7ff8-4df3-ae55-c1154ba02830-kube-api-access\") pod \"installer-6-master-2\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") " pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:11.963596 master-2 kubenswrapper[4776]: I1011 10:38:11.963525 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: I1011 10:38:13.459374 4776 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-tv729 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:13.459431 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:13.460293 master-2 kubenswrapper[4776]: I1011 10:38:13.459442 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: I1011 10:38:14.196284 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:38:14.196344 master-2 
kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:14.196344 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:14.197191 master-2 kubenswrapper[4776]: I1011 10:38:14.196364 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:14.690636 master-2 kubenswrapper[4776]: I1011 10:38:14.690590 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-68fb97bcc4-r24pr"] Oct 11 10:38:15.649450 master-2 kubenswrapper[4776]: I1011 10:38:15.649362 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-2"] Oct 11 10:38:15.651340 master-2 kubenswrapper[4776]: W1011 10:38:15.651296 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8755f64d_7ff8_4df3_ae55_c1154ba02830.slice/crio-778cb332dda0b2164f4a18cca4a33c538ed0cde607af7400e50992351c45e610 WatchSource:0}: Error finding container 778cb332dda0b2164f4a18cca4a33c538ed0cde607af7400e50992351c45e610: Status 404 returned error can't find the container with id 778cb332dda0b2164f4a18cca4a33c538ed0cde607af7400e50992351c45e610 Oct 11 10:38:16.286856 master-2 kubenswrapper[4776]: I1011 10:38:16.286646 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-2" event={"ID":"8755f64d-7ff8-4df3-ae55-c1154ba02830","Type":"ContainerStarted","Data":"9153289ceddc1077d563995a41dced39a8a3e20ad2f9b47e07f851d3852a7efc"} Oct 11 10:38:16.286856 master-2 kubenswrapper[4776]: I1011 10:38:16.286709 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-2" event={"ID":"8755f64d-7ff8-4df3-ae55-c1154ba02830","Type":"ContainerStarted","Data":"778cb332dda0b2164f4a18cca4a33c538ed0cde607af7400e50992351c45e610"} Oct 11 10:38:16.288740 master-2 kubenswrapper[4776]: I1011 10:38:16.288660 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65bb9777fc-bkmsm" event={"ID":"31d64616-a514-4ae3-bb6d-d6eb14d9147a","Type":"ContainerStarted","Data":"cb5441fccf13a8d6993a2d076504c5ed5dd8298eee96bf1cf619fbea6519355c"} Oct 11 10:38:16.289443 master-2 kubenswrapper[4776]: I1011 10:38:16.289421 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65bb9777fc-bkmsm" Oct 11 10:38:16.291238 master-2 kubenswrapper[4776]: I1011 10:38:16.291136 4776 patch_prober.go:28] interesting pod/downloads-65bb9777fc-bkmsm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.75:8080/\": dial tcp 10.128.0.75:8080: connect: connection refused" start-of-body= Oct 11 10:38:16.291238 master-2 kubenswrapper[4776]: I1011 10:38:16.291187 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65bb9777fc-bkmsm" podUID="31d64616-a514-4ae3-bb6d-d6eb14d9147a" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.75:8080/\": dial tcp 
10.128.0.75:8080: connect: connection refused" Oct 11 10:38:17.295078 master-2 kubenswrapper[4776]: I1011 10:38:17.294998 4776 patch_prober.go:28] interesting pod/downloads-65bb9777fc-bkmsm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.75:8080/\": dial tcp 10.128.0.75:8080: connect: connection refused" start-of-body= Oct 11 10:38:17.295078 master-2 kubenswrapper[4776]: I1011 10:38:17.295074 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65bb9777fc-bkmsm" podUID="31d64616-a514-4ae3-bb6d-d6eb14d9147a" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.75:8080/\": dial tcp 10.128.0.75:8080: connect: connection refused" Oct 11 10:38:17.449902 master-2 kubenswrapper[4776]: I1011 10:38:17.444698 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-65bb9777fc-bkmsm" podStartSLOduration=3.625507675 podStartE2EDuration="34.444656343s" podCreationTimestamp="2025-10-11 10:37:43 +0000 UTC" firstStartedPulling="2025-10-11 10:37:44.401075091 +0000 UTC m=+699.185501820" lastFinishedPulling="2025-10-11 10:38:15.220223779 +0000 UTC m=+730.004650488" observedRunningTime="2025-10-11 10:38:17.442522854 +0000 UTC m=+732.226949583" watchObservedRunningTime="2025-10-11 10:38:17.444656343 +0000 UTC m=+732.229083052" Oct 11 10:38:17.849475 master-2 kubenswrapper[4776]: I1011 10:38:17.849391 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-v6dfc_8757af56-20fb-439e-adba-7e4e50378936/assisted-installer-controller/0.log" Oct 11 10:38:18.299064 master-2 kubenswrapper[4776]: I1011 10:38:18.298982 4776 patch_prober.go:28] interesting pod/downloads-65bb9777fc-bkmsm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.75:8080/\": dial tcp 10.128.0.75:8080: connect: connection refused" start-of-body= Oct 11 10:38:18.299064 master-2 kubenswrapper[4776]: I1011 10:38:18.299041 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65bb9777fc-bkmsm" podUID="31d64616-a514-4ae3-bb6d-d6eb14d9147a" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.75:8080/\": dial tcp 10.128.0.75:8080: connect: connection refused" Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: I1011 10:38:18.458638 4776 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-tv729 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:38:18.458729 
master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:18.458729 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:18.459458 master-2 kubenswrapper[4776]: I1011 10:38:18.458754 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:18.548961 master-2 kubenswrapper[4776]: I1011 10:38:18.548884 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-6-master-2" podStartSLOduration=7.54886682 podStartE2EDuration="7.54886682s" podCreationTimestamp="2025-10-11 10:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:38:18.545067376 +0000 UTC m=+733.329494125" watchObservedRunningTime="2025-10-11 10:38:18.54886682 +0000 UTC m=+733.333293529" Oct 11 10:38:18.984561 master-2 kubenswrapper[4776]: I1011 10:38:18.984438 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-65d86dff78-crzgp"] Oct 11 10:38:18.985193 master-2 kubenswrapper[4776]: I1011 10:38:18.985108 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" podUID="5473628e-94c8-4706-bb03-ff4836debe5f" containerName="metrics-server" containerID="cri-o://bc423808a1318a501a04a81a0b62715e5af3476c9da3fb5de99b8aa1ff2380a0" gracePeriod=170 Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: I1011 10:38:19.199591 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: 
[+]poststarthook/openshift.io-startinformers ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:19.199711 master-2 kubenswrapper[4776]: I1011 10:38:19.199653 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: I1011 10:38:23.460408 4776 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-tv729 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:23.460524 master-2 kubenswrapper[4776]: I1011 10:38:23.460505 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:23.907219 master-2 kubenswrapper[4776]: I1011 10:38:23.907149 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-65bb9777fc-bkmsm" Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: I1011 10:38:24.198447 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:24.198821 master-2 
kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:24.198821 master-2 kubenswrapper[4776]: I1011 10:38:24.198576 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: I1011 10:38:28.461201 4776 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-tv729 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:28.461274 master-2 kubenswrapper[4776]: I1011 10:38:28.461270 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:29.196838 master-2 
kubenswrapper[4776]: I1011 10:38:29.196776 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:29.196838 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:29.198039 master-2 kubenswrapper[4776]: I1011 10:38:29.196840 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:30.266892 master-2 kubenswrapper[4776]: I1011 10:38:30.266828 4776 patch_prober.go:28] interesting pod/metrics-server-65d86dff78-crzgp container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:30.266892 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:30.266892 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:30.266892 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:30.266892 master-2 kubenswrapper[4776]: [+]metric-storage-ready ok Oct 11 10:38:30.266892 master-2 kubenswrapper[4776]: [+]metric-informer-sync ok Oct 11 10:38:30.266892 master-2 kubenswrapper[4776]: [+]metadata-informer-sync ok Oct 11 10:38:30.266892 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:30.266892 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:30.266892 master-2 kubenswrapper[4776]: I1011 10:38:30.266889 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" 
podUID="5473628e-94c8-4706-bb03-ff4836debe5f" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:32.956730 master-2 kubenswrapper[4776]: I1011 10:38:32.956603 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-57bccbfdf6-2s9dn" podUID="ae5dca9f-dad3-4712-86f9-3a3e537b5c99" containerName="console" containerID="cri-o://60848da2e2d10f9d66feb8c08460e0558bf900e390d4618db9b15ae08a0b1a6f" gracePeriod=15 Oct 11 10:38:33.425525 master-2 kubenswrapper[4776]: I1011 10:38:33.425349 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57bccbfdf6-2s9dn_ae5dca9f-dad3-4712-86f9-3a3e537b5c99/console/0.log" Oct 11 10:38:33.425525 master-2 kubenswrapper[4776]: I1011 10:38:33.425399 4776 generic.go:334] "Generic (PLEG): container finished" podID="ae5dca9f-dad3-4712-86f9-3a3e537b5c99" containerID="60848da2e2d10f9d66feb8c08460e0558bf900e390d4618db9b15ae08a0b1a6f" exitCode=2 Oct 11 10:38:33.425525 master-2 kubenswrapper[4776]: I1011 10:38:33.425428 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57bccbfdf6-2s9dn" event={"ID":"ae5dca9f-dad3-4712-86f9-3a3e537b5c99","Type":"ContainerDied","Data":"60848da2e2d10f9d66feb8c08460e0558bf900e390d4618db9b15ae08a0b1a6f"} Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: I1011 10:38:33.459374 4776 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-tv729 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:33.459447 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:33.459999 master-2 kubenswrapper[4776]: I1011 10:38:33.459445 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:34.163048 master-2 kubenswrapper[4776]: I1011 10:38:34.162985 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57bccbfdf6-2s9dn_ae5dca9f-dad3-4712-86f9-3a3e537b5c99/console/0.log" Oct 11 10:38:34.163048 master-2 kubenswrapper[4776]: I1011 10:38:34.163056 4776 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: I1011 10:38:34.200399 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:34.200515 master-2 kubenswrapper[4776]: I1011 10:38:34.200494 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:34.286297 master-2 kubenswrapper[4776]: I1011 10:38:34.286196 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-config\") pod \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " Oct 11 10:38:34.286862 master-2 kubenswrapper[4776]: I1011 10:38:34.286398 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-serving-cert\") pod \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " Oct 11 10:38:34.286862 master-2 kubenswrapper[4776]: I1011 10:38:34.286462 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-oauth-serving-cert\") pod 
\"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " Oct 11 10:38:34.286862 master-2 kubenswrapper[4776]: I1011 10:38:34.286538 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-oauth-config\") pod \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " Oct 11 10:38:34.286862 master-2 kubenswrapper[4776]: I1011 10:38:34.286608 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-service-ca\") pod \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " Oct 11 10:38:34.286862 master-2 kubenswrapper[4776]: I1011 10:38:34.286670 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rz6c\" (UniqueName: \"kubernetes.io/projected/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-kube-api-access-6rz6c\") pod \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\" (UID: \"ae5dca9f-dad3-4712-86f9-3a3e537b5c99\") " Oct 11 10:38:34.286862 master-2 kubenswrapper[4776]: I1011 10:38:34.286772 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-config" (OuterVolumeSpecName: "console-config") pod "ae5dca9f-dad3-4712-86f9-3a3e537b5c99" (UID: "ae5dca9f-dad3-4712-86f9-3a3e537b5c99"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:38:34.287479 master-2 kubenswrapper[4776]: I1011 10:38:34.287326 4776 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:34.287479 master-2 kubenswrapper[4776]: I1011 10:38:34.287212 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-service-ca" (OuterVolumeSpecName: "service-ca") pod "ae5dca9f-dad3-4712-86f9-3a3e537b5c99" (UID: "ae5dca9f-dad3-4712-86f9-3a3e537b5c99"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:38:34.287879 master-2 kubenswrapper[4776]: I1011 10:38:34.287778 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ae5dca9f-dad3-4712-86f9-3a3e537b5c99" (UID: "ae5dca9f-dad3-4712-86f9-3a3e537b5c99"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:38:34.289759 master-2 kubenswrapper[4776]: I1011 10:38:34.289655 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ae5dca9f-dad3-4712-86f9-3a3e537b5c99" (UID: "ae5dca9f-dad3-4712-86f9-3a3e537b5c99"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:34.289887 master-2 kubenswrapper[4776]: I1011 10:38:34.289774 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ae5dca9f-dad3-4712-86f9-3a3e537b5c99" (UID: "ae5dca9f-dad3-4712-86f9-3a3e537b5c99"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:34.292207 master-2 kubenswrapper[4776]: I1011 10:38:34.292151 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-kube-api-access-6rz6c" (OuterVolumeSpecName: "kube-api-access-6rz6c") pod "ae5dca9f-dad3-4712-86f9-3a3e537b5c99" (UID: "ae5dca9f-dad3-4712-86f9-3a3e537b5c99"). InnerVolumeSpecName "kube-api-access-6rz6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:38:34.388713 master-2 kubenswrapper[4776]: I1011 10:38:34.388514 4776 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-oauth-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:34.388713 master-2 kubenswrapper[4776]: I1011 10:38:34.388557 4776 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-service-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:34.388713 master-2 kubenswrapper[4776]: I1011 10:38:34.388570 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rz6c\" (UniqueName: \"kubernetes.io/projected/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-kube-api-access-6rz6c\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:34.388713 master-2 kubenswrapper[4776]: I1011 10:38:34.388578 4776 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-console-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:34.388713 master-2 kubenswrapper[4776]: I1011 10:38:34.388586 4776 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ae5dca9f-dad3-4712-86f9-3a3e537b5c99-oauth-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:34.431648 master-2 kubenswrapper[4776]: I1011 10:38:34.431581 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-57bccbfdf6-2s9dn_ae5dca9f-dad3-4712-86f9-3a3e537b5c99/console/0.log" Oct 11 10:38:34.431648 master-2 kubenswrapper[4776]: I1011 10:38:34.431634 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57bccbfdf6-2s9dn" event={"ID":"ae5dca9f-dad3-4712-86f9-3a3e537b5c99","Type":"ContainerDied","Data":"96abc13c3c5fa7d6b33609ba42f139af90fc31f27879c88e7021022b86b662c8"} Oct 11 10:38:34.431966 master-2 kubenswrapper[4776]: I1011 10:38:34.431695 4776 scope.go:117] "RemoveContainer" containerID="60848da2e2d10f9d66feb8c08460e0558bf900e390d4618db9b15ae08a0b1a6f" Oct 11 10:38:34.431966 master-2 kubenswrapper[4776]: I1011 10:38:34.431703 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57bccbfdf6-2s9dn" Oct 11 10:38:35.001960 master-2 kubenswrapper[4776]: I1011 10:38:35.001892 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-57bccbfdf6-2s9dn"] Oct 11 10:38:35.020327 master-2 kubenswrapper[4776]: I1011 10:38:35.015695 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-57bccbfdf6-2s9dn"] Oct 11 10:38:36.079483 master-2 kubenswrapper[4776]: I1011 10:38:36.079393 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae5dca9f-dad3-4712-86f9-3a3e537b5c99" path="/var/lib/kubelet/pods/ae5dca9f-dad3-4712-86f9-3a3e537b5c99/volumes" Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: I1011 10:38:38.459150 4776 patch_prober.go:28] interesting pod/apiserver-68f4c55ff4-tv729 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:38.459223 master-2 kubenswrapper[4776]: I1011 10:38:38.459214 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:39.095713 master-2 kubenswrapper[4776]: I1011 10:38:39.095617 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-76f8bc4746-5jp5k"] Oct 11 10:38:39.096023 master-2 kubenswrapper[4776]: E1011 10:38:39.095970 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae5dca9f-dad3-4712-86f9-3a3e537b5c99" containerName="console" Oct 11 10:38:39.096023 master-2 kubenswrapper[4776]: I1011 10:38:39.095992 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae5dca9f-dad3-4712-86f9-3a3e537b5c99" containerName="console" Oct 11 10:38:39.096237 master-2 kubenswrapper[4776]: I1011 10:38:39.096197 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae5dca9f-dad3-4712-86f9-3a3e537b5c99" containerName="console" Oct 11 10:38:39.097023 master-2 kubenswrapper[4776]: I1011 10:38:39.096980 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:39.101950 master-2 kubenswrapper[4776]: I1011 10:38:39.101879 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-8wmjp" Oct 11 10:38:39.101950 master-2 kubenswrapper[4776]: I1011 10:38:39.101884 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 11 10:38:39.102720 master-2 kubenswrapper[4776]: I1011 10:38:39.102649 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 11 10:38:39.102931 master-2 kubenswrapper[4776]: I1011 10:38:39.102729 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 11 10:38:39.103023 master-2 kubenswrapper[4776]: I1011 10:38:39.102805 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 11 10:38:39.107720 master-2 kubenswrapper[4776]: I1011 10:38:39.107635 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 11 10:38:39.112758 master-2 kubenswrapper[4776]: I1011 10:38:39.112717 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Oct 11 10:38:39.157434 master-2 kubenswrapper[4776]: I1011 10:38:39.157348 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nf86\" (UniqueName: \"kubernetes.io/projected/9c72970e-d35b-4f28-8291-e3ed3683c59c-kube-api-access-7nf86\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:39.157744 master-2 kubenswrapper[4776]: I1011 10:38:39.157489 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-service-ca\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:39.157744 master-2 kubenswrapper[4776]: I1011 10:38:39.157556 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-trusted-ca-bundle\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:39.157969 master-2 kubenswrapper[4776]: I1011 10:38:39.157811 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-oauth-config\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:39.157969 master-2 kubenswrapper[4776]: I1011 10:38:39.157889 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-serving-cert\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:39.157969 master-2 kubenswrapper[4776]: I1011 
10:38:39.157944 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-config\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:39.158160 master-2 kubenswrapper[4776]: I1011 10:38:39.158053 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-oauth-serving-cert\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:39.193280 master-2 kubenswrapper[4776]: I1011 10:38:39.193185 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Oct 11 10:38:39.193280 master-2 kubenswrapper[4776]: I1011 10:38:39.193256 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" Oct 11 10:38:39.260153 master-2 kubenswrapper[4776]: I1011 10:38:39.260057 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-oauth-serving-cert\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:39.260433 master-2 kubenswrapper[4776]: I1011 10:38:39.260226 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nf86\" (UniqueName: \"kubernetes.io/projected/9c72970e-d35b-4f28-8291-e3ed3683c59c-kube-api-access-7nf86\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:39.260433 master-2 kubenswrapper[4776]: I1011 10:38:39.260265 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-service-ca\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:39.260433 master-2 kubenswrapper[4776]: I1011 10:38:39.260304 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-trusted-ca-bundle\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:39.260433 master-2 kubenswrapper[4776]: I1011 10:38:39.260397 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-oauth-config\") pod \"console-76f8bc4746-5jp5k\" (UID: 
\"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:39.260569 master-2 kubenswrapper[4776]: I1011 10:38:39.260459 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-serving-cert\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:39.260569 master-2 kubenswrapper[4776]: I1011 10:38:39.260510 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-config\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:39.262399 master-2 kubenswrapper[4776]: I1011 10:38:39.262302 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-service-ca\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:39.262753 master-2 kubenswrapper[4776]: I1011 10:38:39.262644 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-oauth-serving-cert\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:39.263041 master-2 kubenswrapper[4776]: I1011 10:38:39.262988 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-config\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:39.263413 master-2 kubenswrapper[4776]: I1011 10:38:39.263325 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-trusted-ca-bundle\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:39.265793 master-2 kubenswrapper[4776]: I1011 10:38:39.265755 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-oauth-config\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:39.266783 master-2 kubenswrapper[4776]: I1011 10:38:39.266733 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-serving-cert\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:39.464900 master-2 kubenswrapper[4776]: I1011 10:38:39.464806 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76f8bc4746-5jp5k"] Oct 11 10:38:39.728107 master-2 kubenswrapper[4776]: I1011 10:38:39.727942 4776 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" podUID="dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" containerName="oauth-openshift" containerID="cri-o://4c53ab08cf4b5166d95c57913daeef16e08566e476b981ef95245c117bb87d6a" gracePeriod=15 Oct 11 10:38:39.883099 master-2 kubenswrapper[4776]: I1011 10:38:39.883005 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nf86\" (UniqueName: \"kubernetes.io/projected/9c72970e-d35b-4f28-8291-e3ed3683c59c-kube-api-access-7nf86\") pod \"console-76f8bc4746-5jp5k\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:40.022499 master-2 kubenswrapper[4776]: I1011 10:38:40.022375 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:41.482120 master-2 kubenswrapper[4776]: I1011 10:38:41.481973 4776 generic.go:334] "Generic (PLEG): container finished" podID="dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" containerID="4c53ab08cf4b5166d95c57913daeef16e08566e476b981ef95245c117bb87d6a" exitCode=0 Oct 11 10:38:41.482120 master-2 kubenswrapper[4776]: I1011 10:38:41.482025 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" event={"ID":"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369","Type":"ContainerDied","Data":"4c53ab08cf4b5166d95c57913daeef16e08566e476b981ef95245c117bb87d6a"} Oct 11 10:38:41.535704 master-2 kubenswrapper[4776]: I1011 10:38:41.535627 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:38:41.602631 master-2 kubenswrapper[4776]: I1011 10:38:41.602544 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-provider-selection\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " Oct 11 10:38:41.602631 master-2 kubenswrapper[4776]: I1011 10:38:41.602634 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-cliconfig\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " Oct 11 10:38:41.603124 master-2 kubenswrapper[4776]: I1011 10:38:41.602711 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-policies\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " Oct 11 10:38:41.603124 master-2 kubenswrapper[4776]: I1011 10:38:41.602757 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfkq8\" (UniqueName: \"kubernetes.io/projected/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-kube-api-access-jfkq8\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " Oct 11 10:38:41.603124 master-2 kubenswrapper[4776]: I1011 10:38:41.602795 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-service-ca\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " Oct 11 10:38:41.603124 master-2 kubenswrapper[4776]: I1011 10:38:41.602849 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-login\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " Oct 11 10:38:41.603124 master-2 kubenswrapper[4776]: I1011 10:38:41.602892 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-session\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " Oct 11 10:38:41.603124 master-2 kubenswrapper[4776]: I1011 10:38:41.602972 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-error\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " Oct 11 10:38:41.603124 master-2 kubenswrapper[4776]: I1011 10:38:41.603004 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-dir\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " Oct 11 10:38:41.603124 master-2 kubenswrapper[4776]: I1011 10:38:41.603040 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-router-certs\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " Oct 11 10:38:41.603124 master-2 kubenswrapper[4776]: I1011 10:38:41.603073 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-ocp-branding-template\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " Oct 11 10:38:41.603124 master-2 kubenswrapper[4776]: I1011 10:38:41.603111 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-trusted-ca-bundle\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " Oct 11 10:38:41.603996 master-2 kubenswrapper[4776]: I1011 10:38:41.603177 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-serving-cert\") pod \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\" (UID: \"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369\") " Oct 11 10:38:41.603996 master-2 kubenswrapper[4776]: I1011 10:38:41.603436 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:38:41.603996 master-2 kubenswrapper[4776]: I1011 10:38:41.603501 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:38:41.603996 master-2 kubenswrapper[4776]: I1011 10:38:41.603503 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:38:41.603996 master-2 kubenswrapper[4776]: I1011 10:38:41.603607 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:38:41.603996 master-2 kubenswrapper[4776]: I1011 10:38:41.603629 4776 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:41.603996 master-2 kubenswrapper[4776]: I1011 10:38:41.603752 4776 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-cliconfig\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:41.603996 master-2 kubenswrapper[4776]: I1011 10:38:41.603783 4776 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-audit-policies\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:41.603996 master-2 kubenswrapper[4776]: I1011 10:38:41.603893 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:38:41.605807 master-2 kubenswrapper[4776]: I1011 10:38:41.605749 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). 
InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:41.605946 master-2 kubenswrapper[4776]: I1011 10:38:41.605818 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:41.606131 master-2 kubenswrapper[4776]: I1011 10:38:41.606051 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:41.606568 master-2 kubenswrapper[4776]: I1011 10:38:41.606513 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:41.606820 master-2 kubenswrapper[4776]: I1011 10:38:41.606768 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:41.606916 master-2 kubenswrapper[4776]: I1011 10:38:41.606862 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:41.607969 master-2 kubenswrapper[4776]: I1011 10:38:41.607909 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:41.608081 master-2 kubenswrapper[4776]: I1011 10:38:41.607963 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-kube-api-access-jfkq8" (OuterVolumeSpecName: "kube-api-access-jfkq8") pod "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" (UID: "dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369"). 
InnerVolumeSpecName "kube-api-access-jfkq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:38:41.705321 master-2 kubenswrapper[4776]: I1011 10:38:41.705223 4776 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-error\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:41.705321 master-2 kubenswrapper[4776]: I1011 10:38:41.705293 4776 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-router-certs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:41.705321 master-2 kubenswrapper[4776]: I1011 10:38:41.705314 4776 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-ocp-branding-template\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:41.705321 master-2 kubenswrapper[4776]: I1011 10:38:41.705334 4776 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:41.705872 master-2 kubenswrapper[4776]: I1011 10:38:41.705354 4776 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:41.705872 master-2 kubenswrapper[4776]: I1011 10:38:41.705372 4776 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-provider-selection\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:41.705872 master-2 kubenswrapper[4776]: I1011 10:38:41.705391 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfkq8\" (UniqueName: \"kubernetes.io/projected/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-kube-api-access-jfkq8\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:41.705872 master-2 kubenswrapper[4776]: I1011 10:38:41.705410 4776 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-service-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:41.705872 master-2 kubenswrapper[4776]: I1011 10:38:41.705427 4776 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-user-template-login\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:41.705872 master-2 kubenswrapper[4776]: I1011 10:38:41.705448 4776 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369-v4-0-config-system-session\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:42.492595 master-2 kubenswrapper[4776]: I1011 10:38:42.492486 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" 
event={"ID":"dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369","Type":"ContainerDied","Data":"fb05ca004bb431ae259dec9c7bc562a3772d43d4b0ba3d1a323b0aee4334c90e"} Oct 11 10:38:42.493493 master-2 kubenswrapper[4776]: I1011 10:38:42.492624 4776 scope.go:117] "RemoveContainer" containerID="4c53ab08cf4b5166d95c57913daeef16e08566e476b981ef95245c117bb87d6a" Oct 11 10:38:42.493493 master-2 kubenswrapper[4776]: I1011 10:38:42.493105 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-68fb97bcc4-r24pr" Oct 11 10:38:42.496741 master-2 kubenswrapper[4776]: I1011 10:38:42.496579 4776 generic.go:334] "Generic (PLEG): container finished" podID="cc095688-9188-4472-9c26-d4d286e5ef06" containerID="6dd550d507d66e801941ec8d8dccd203204326eb4fa9e98d9d9de574d26fd168" exitCode=0 Oct 11 10:38:42.496741 master-2 kubenswrapper[4776]: I1011 10:38:42.496636 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" event={"ID":"cc095688-9188-4472-9c26-d4d286e5ef06","Type":"ContainerDied","Data":"6dd550d507d66e801941ec8d8dccd203204326eb4fa9e98d9d9de574d26fd168"} Oct 11 10:38:42.710059 master-2 kubenswrapper[4776]: I1011 10:38:42.710007 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76f8bc4746-5jp5k"] Oct 11 10:38:42.722050 master-2 kubenswrapper[4776]: I1011 10:38:42.721990 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-68fb97bcc4-r24pr"] Oct 11 10:38:42.734016 master-2 kubenswrapper[4776]: I1011 10:38:42.733959 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-68fb97bcc4-r24pr"] Oct 11 10:38:43.032062 master-2 kubenswrapper[4776]: I1011 10:38:43.032033 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:38:43.123368 master-2 kubenswrapper[4776]: I1011 10:38:43.123297 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-client\") pod \"cc095688-9188-4472-9c26-d4d286e5ef06\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " Oct 11 10:38:43.123368 master-2 kubenswrapper[4776]: I1011 10:38:43.123371 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-serving-cert\") pod \"cc095688-9188-4472-9c26-d4d286e5ef06\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " Oct 11 10:38:43.123630 master-2 kubenswrapper[4776]: I1011 10:38:43.123417 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-audit-policies\") pod \"cc095688-9188-4472-9c26-d4d286e5ef06\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " Oct 11 10:38:43.123630 master-2 kubenswrapper[4776]: I1011 10:38:43.123448 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-encryption-config\") pod \"cc095688-9188-4472-9c26-d4d286e5ef06\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " Oct 11 10:38:43.123630 master-2 kubenswrapper[4776]: I1011 10:38:43.123491 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54fhv\" (UniqueName: \"kubernetes.io/projected/cc095688-9188-4472-9c26-d4d286e5ef06-kube-api-access-54fhv\") pod \"cc095688-9188-4472-9c26-d4d286e5ef06\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " Oct 11 10:38:43.123630 master-2 kubenswrapper[4776]: I1011 10:38:43.123519 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-serving-ca\") pod \"cc095688-9188-4472-9c26-d4d286e5ef06\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " Oct 11 10:38:43.123630 master-2 kubenswrapper[4776]: I1011 10:38:43.123543 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cc095688-9188-4472-9c26-d4d286e5ef06-audit-dir\") pod \"cc095688-9188-4472-9c26-d4d286e5ef06\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " Oct 11 10:38:43.123630 master-2 kubenswrapper[4776]: I1011 10:38:43.123602 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-trusted-ca-bundle\") pod \"cc095688-9188-4472-9c26-d4d286e5ef06\" (UID: \"cc095688-9188-4472-9c26-d4d286e5ef06\") " Oct 11 10:38:43.124559 master-2 kubenswrapper[4776]: I1011 10:38:43.124516 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "cc095688-9188-4472-9c26-d4d286e5ef06" (UID: "cc095688-9188-4472-9c26-d4d286e5ef06"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:38:43.124559 master-2 kubenswrapper[4776]: I1011 10:38:43.124524 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "cc095688-9188-4472-9c26-d4d286e5ef06" (UID: "cc095688-9188-4472-9c26-d4d286e5ef06"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:38:43.124667 master-2 kubenswrapper[4776]: I1011 10:38:43.124606 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc095688-9188-4472-9c26-d4d286e5ef06-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "cc095688-9188-4472-9c26-d4d286e5ef06" (UID: "cc095688-9188-4472-9c26-d4d286e5ef06"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:38:43.124995 master-2 kubenswrapper[4776]: I1011 10:38:43.124957 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "cc095688-9188-4472-9c26-d4d286e5ef06" (UID: "cc095688-9188-4472-9c26-d4d286e5ef06"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:38:43.127089 master-2 kubenswrapper[4776]: I1011 10:38:43.127054 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cc095688-9188-4472-9c26-d4d286e5ef06" (UID: "cc095688-9188-4472-9c26-d4d286e5ef06"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:43.127391 master-2 kubenswrapper[4776]: I1011 10:38:43.127337 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "cc095688-9188-4472-9c26-d4d286e5ef06" (UID: "cc095688-9188-4472-9c26-d4d286e5ef06"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:43.128088 master-2 kubenswrapper[4776]: I1011 10:38:43.128034 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "cc095688-9188-4472-9c26-d4d286e5ef06" (UID: "cc095688-9188-4472-9c26-d4d286e5ef06"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:38:43.128355 master-2 kubenswrapper[4776]: I1011 10:38:43.128315 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc095688-9188-4472-9c26-d4d286e5ef06-kube-api-access-54fhv" (OuterVolumeSpecName: "kube-api-access-54fhv") pod "cc095688-9188-4472-9c26-d4d286e5ef06" (UID: "cc095688-9188-4472-9c26-d4d286e5ef06"). InnerVolumeSpecName "kube-api-access-54fhv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:38:43.225206 master-2 kubenswrapper[4776]: I1011 10:38:43.225153 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:43.225206 master-2 kubenswrapper[4776]: I1011 10:38:43.225192 4776 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-audit-policies\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:43.225206 master-2 kubenswrapper[4776]: I1011 10:38:43.225202 4776 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-encryption-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:43.225206 master-2 kubenswrapper[4776]: I1011 10:38:43.225212 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54fhv\" (UniqueName: \"kubernetes.io/projected/cc095688-9188-4472-9c26-d4d286e5ef06-kube-api-access-54fhv\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:43.225206 master-2 kubenswrapper[4776]: I1011 10:38:43.225222 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-serving-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:43.225567 master-2 kubenswrapper[4776]: I1011 10:38:43.225231 4776 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cc095688-9188-4472-9c26-d4d286e5ef06-audit-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:43.225567 master-2 kubenswrapper[4776]: I1011 10:38:43.225239 4776 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc095688-9188-4472-9c26-d4d286e5ef06-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:43.225567 master-2 kubenswrapper[4776]: I1011 10:38:43.225247 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc095688-9188-4472-9c26-d4d286e5ef06-etcd-client\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:43.503427 master-2 kubenswrapper[4776]: I1011 10:38:43.503339 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76f8bc4746-5jp5k" event={"ID":"9c72970e-d35b-4f28-8291-e3ed3683c59c","Type":"ContainerStarted","Data":"ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f"} Oct 11 10:38:43.503427 master-2 kubenswrapper[4776]: I1011 10:38:43.503386 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76f8bc4746-5jp5k" event={"ID":"9c72970e-d35b-4f28-8291-e3ed3683c59c","Type":"ContainerStarted","Data":"3717643475eebdbec50aa27932ca525c2e2f047c2a23862ba4394759fc5478d9"} Oct 11 10:38:43.507334 master-2 kubenswrapper[4776]: I1011 10:38:43.507242 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" event={"ID":"cc095688-9188-4472-9c26-d4d286e5ef06","Type":"ContainerDied","Data":"38caa553aa3028fefa0c3bd77280e5deedf30358e11b27817863ca0e8b11f26f"} Oct 11 10:38:43.507334 master-2 kubenswrapper[4776]: I1011 10:38:43.507335 4776 scope.go:117] "RemoveContainer" containerID="6dd550d507d66e801941ec8d8dccd203204326eb4fa9e98d9d9de574d26fd168" Oct 11 10:38:43.507334 
master-2 kubenswrapper[4776]: I1011 10:38:43.507335 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729" Oct 11 10:38:43.521656 master-2 kubenswrapper[4776]: I1011 10:38:43.521588 4776 scope.go:117] "RemoveContainer" containerID="890baf1a750c905b81b3a86397294058183d567d6c2fdd860242c1e809168b9e" Oct 11 10:38:43.565073 master-2 kubenswrapper[4776]: I1011 10:38:43.565014 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-76f8bc4746-5jp5k" podStartSLOduration=36.564993715 podStartE2EDuration="36.564993715s" podCreationTimestamp="2025-10-11 10:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:38:43.535405749 +0000 UTC m=+758.319832458" watchObservedRunningTime="2025-10-11 10:38:43.564993715 +0000 UTC m=+758.349420424" Oct 11 10:38:43.574692 master-2 kubenswrapper[4776]: I1011 10:38:43.566922 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"] Oct 11 10:38:43.580783 master-2 kubenswrapper[4776]: I1011 10:38:43.577548 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729"] Oct 11 10:38:43.710352 master-2 kubenswrapper[4776]: I1011 10:38:43.710257 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6fccd5ccc-lxq75"] Oct 11 10:38:43.710609 master-2 kubenswrapper[4776]: E1011 10:38:43.710572 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="fix-audit-permissions" Oct 11 10:38:43.710609 master-2 kubenswrapper[4776]: I1011 10:38:43.710601 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="fix-audit-permissions" Oct 11 10:38:43.710697 master-2 kubenswrapper[4776]: E1011 10:38:43.710625 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" containerName="oauth-openshift" Oct 11 10:38:43.710697 master-2 kubenswrapper[4776]: I1011 10:38:43.710635 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" containerName="oauth-openshift" Oct 11 10:38:43.710697 master-2 kubenswrapper[4776]: E1011 10:38:43.710648 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" Oct 11 10:38:43.710697 master-2 kubenswrapper[4776]: I1011 10:38:43.710657 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" Oct 11 10:38:43.710865 master-2 kubenswrapper[4776]: I1011 10:38:43.710834 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" containerName="oauth-apiserver" Oct 11 10:38:43.710898 master-2 kubenswrapper[4776]: I1011 10:38:43.710866 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" containerName="oauth-openshift" Oct 11 10:38:43.711472 master-2 kubenswrapper[4776]: I1011 10:38:43.711443 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.715309 master-2 kubenswrapper[4776]: I1011 10:38:43.715261 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 11 10:38:43.715423 master-2 kubenswrapper[4776]: I1011 10:38:43.715391 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 11 10:38:43.715542 master-2 kubenswrapper[4776]: I1011 10:38:43.715506 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 11 10:38:43.715755 master-2 kubenswrapper[4776]: I1011 10:38:43.715707 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 11 10:38:43.715989 master-2 kubenswrapper[4776]: I1011 10:38:43.715959 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 11 10:38:43.716031 master-2 kubenswrapper[4776]: I1011 10:38:43.715957 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 11 10:38:43.716091 master-2 kubenswrapper[4776]: I1011 10:38:43.716068 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Oct 11 10:38:43.716132 master-2 kubenswrapper[4776]: I1011 10:38:43.715707 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-279hr" Oct 11 10:38:43.716568 master-2 kubenswrapper[4776]: I1011 10:38:43.716534 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 11 10:38:43.717553 master-2 kubenswrapper[4776]: I1011 10:38:43.717510 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 11 10:38:43.718238 master-2 kubenswrapper[4776]: I1011 10:38:43.718212 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Oct 11 10:38:43.718238 master-2 kubenswrapper[4776]: I1011 10:38:43.718227 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 11 10:38:43.728696 master-2 kubenswrapper[4776]: I1011 10:38:43.728535 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6fccd5ccc-lxq75"] Oct 11 10:38:43.731155 master-2 kubenswrapper[4776]: I1011 10:38:43.731098 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 11 10:38:43.740059 master-2 kubenswrapper[4776]: I1011 10:38:43.739862 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 11 10:38:43.833723 master-2 kubenswrapper[4776]: I1011 10:38:43.833451 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-session\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " 
pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.833723 master-2 kubenswrapper[4776]: I1011 10:38:43.833502 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.833723 master-2 kubenswrapper[4776]: I1011 10:38:43.833522 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.833723 master-2 kubenswrapper[4776]: I1011 10:38:43.833548 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.833723 master-2 kubenswrapper[4776]: I1011 10:38:43.833569 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.834084 master-2 kubenswrapper[4776]: I1011 10:38:43.833717 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-user-template-error\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.834084 master-2 kubenswrapper[4776]: I1011 10:38:43.833873 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.834084 master-2 kubenswrapper[4776]: I1011 10:38:43.833905 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-audit-dir\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.834084 master-2 kubenswrapper[4776]: I1011 10:38:43.833945 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-audit-policies\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.834084 master-2 kubenswrapper[4776]: I1011 10:38:43.833965 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g9ll\" (UniqueName: \"kubernetes.io/projected/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-kube-api-access-2g9ll\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.834084 master-2 kubenswrapper[4776]: I1011 10:38:43.833993 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.834084 master-2 kubenswrapper[4776]: I1011 10:38:43.834027 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.834084 master-2 kubenswrapper[4776]: I1011 10:38:43.834060 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-user-template-login\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.935690 master-2 kubenswrapper[4776]: I1011 10:38:43.935622 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.935690 master-2 kubenswrapper[4776]: I1011 10:38:43.935688 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-audit-dir\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.935970 master-2 kubenswrapper[4776]: I1011 10:38:43.935720 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g9ll\" (UniqueName: \"kubernetes.io/projected/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-kube-api-access-2g9ll\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.935970 master-2 kubenswrapper[4776]: I1011 10:38:43.935754 4776 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-audit-policies\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.935970 master-2 kubenswrapper[4776]: I1011 10:38:43.935791 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.935970 master-2 kubenswrapper[4776]: I1011 10:38:43.935798 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-audit-dir\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.935970 master-2 kubenswrapper[4776]: I1011 10:38:43.935819 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.935970 master-2 kubenswrapper[4776]: I1011 10:38:43.935861 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-user-template-login\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.935970 master-2 kubenswrapper[4776]: I1011 10:38:43.935918 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-session\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.935970 master-2 kubenswrapper[4776]: I1011 10:38:43.935935 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.935970 master-2 kubenswrapper[4776]: I1011 10:38:43.935950 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.936271 master-2 kubenswrapper[4776]: I1011 10:38:43.936010 4776 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.936271 master-2 kubenswrapper[4776]: I1011 10:38:43.936027 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.936271 master-2 kubenswrapper[4776]: I1011 10:38:43.936046 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-user-template-error\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.936977 master-2 kubenswrapper[4776]: I1011 10:38:43.936843 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-audit-policies\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.937103 master-2 kubenswrapper[4776]: I1011 10:38:43.937078 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.937945 master-2 kubenswrapper[4776]: I1011 10:38:43.937926 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.938712 master-2 kubenswrapper[4776]: I1011 10:38:43.938655 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.939303 master-2 kubenswrapper[4776]: I1011 10:38:43.939263 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.939400 master-2 
kubenswrapper[4776]: I1011 10:38:43.939344 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-session\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.939621 master-2 kubenswrapper[4776]: I1011 10:38:43.939594 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.940004 master-2 kubenswrapper[4776]: I1011 10:38:43.939990 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.940127 master-2 kubenswrapper[4776]: I1011 10:38:43.940041 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-user-template-error\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.940775 master-2 kubenswrapper[4776]: I1011 10:38:43.940743 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-user-template-login\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.941343 master-2 kubenswrapper[4776]: I1011 10:38:43.941297 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:43.956649 master-2 kubenswrapper[4776]: I1011 10:38:43.956601 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g9ll\" (UniqueName: \"kubernetes.io/projected/6d7c74c7-9652-4fe6-93c3-667ec676ce1c-kube-api-access-2g9ll\") pod \"oauth-openshift-6fccd5ccc-lxq75\" (UID: \"6d7c74c7-9652-4fe6-93c3-667ec676ce1c\") " pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:44.064589 master-2 kubenswrapper[4776]: I1011 10:38:44.064133 4776 util.go:30] "No sandbox for pod can be found. 
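The entries above walk the oauth-openshift pod's volumes through VerifyControllerAttachedVolume → "MountVolume started" → "MountVolume.SetUp succeeded". A minimal sketch, under the same assumptions as before (one journal entry per line, placeholder file name), for measuring the per-volume gap between the last two phases from the klog timestamps; the year 2025 is taken from the timestamps in this excerpt, since klog headers omit it.

```python
#!/usr/bin/env python3
"""Minimal sketch: per-volume latency from "MountVolume started" to
"MountVolume.SetUp succeeded" for one pod, derived from klog timestamps."""
import re
from datetime import datetime

POD_UID = "6d7c74c7-9652-4fe6-93c3-667ec676ce1c"  # oauth-openshift pod above
KLOG_TS = re.compile(r'[IWEF](\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})')
STARTED = re.compile(r'MountVolume started for volume \\?"(?P<vol>[^"\\]+)')
SETUP_OK = re.compile(r'MountVolume\.SetUp succeeded for volume \\?"(?P<vol>[^"\\]+)')

def klog_time(line: str):
    m = KLOG_TS.search(line)
    if not m:
        return None
    month, day, hms = m.groups()
    # klog omits the year; 2025 is assumed from this excerpt's timestamps.
    return datetime.strptime(f"2025-{month}-{day} {hms}", "%Y-%m-%d %H:%M:%S.%f")

def main(path: str = "kubelet.log") -> None:
    started, done = {}, {}
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if POD_UID not in line:
                continue
            ts = klog_time(line)
            if ts is None:
                continue
            m = STARTED.search(line)
            if m:
                started.setdefault(m.group("vol"), ts)
            m = SETUP_OK.search(line)
            if m:
                done.setdefault(m.group("vol"), ts)
    for vol, t0 in sorted(started.items()):
        t1 = done.get(vol)
        status = f"{(t1 - t0).total_seconds() * 1000:.1f} ms" if t1 else "no SetUp succeeded seen"
        print(f"{vol:55s} {status}")

if __name__ == "__main__":
    main()
```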
Need to start a new one" pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:44.068434 master-2 kubenswrapper[4776]: I1011 10:38:44.068369 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc095688-9188-4472-9c26-d4d286e5ef06" path="/var/lib/kubelet/pods/cc095688-9188-4472-9c26-d4d286e5ef06/volumes" Oct 11 10:38:44.069747 master-2 kubenswrapper[4776]: I1011 10:38:44.069708 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369" path="/var/lib/kubelet/pods/dd9ad6e0-e85a-41fb-a5cf-a8abeb46f369/volumes" Oct 11 10:38:44.192591 master-2 kubenswrapper[4776]: I1011 10:38:44.192508 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Oct 11 10:38:44.192591 master-2 kubenswrapper[4776]: I1011 10:38:44.192574 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" Oct 11 10:38:44.497446 master-2 kubenswrapper[4776]: I1011 10:38:44.497414 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6fccd5ccc-lxq75"] Oct 11 10:38:44.499401 master-2 kubenswrapper[4776]: W1011 10:38:44.499365 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d7c74c7_9652_4fe6_93c3_667ec676ce1c.slice/crio-b3852fd2b203546b8d8ce1fcc93dc21bc0ed0046d146a93029a6263a62ee1ba2 WatchSource:0}: Error finding container b3852fd2b203546b8d8ce1fcc93dc21bc0ed0046d146a93029a6263a62ee1ba2: Status 404 returned error can't find the container with id b3852fd2b203546b8d8ce1fcc93dc21bc0ed0046d146a93029a6263a62ee1ba2 Oct 11 10:38:44.514170 master-2 kubenswrapper[4776]: I1011 10:38:44.514073 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" event={"ID":"6d7c74c7-9652-4fe6-93c3-667ec676ce1c","Type":"ContainerStarted","Data":"b3852fd2b203546b8d8ce1fcc93dc21bc0ed0046d146a93029a6263a62ee1ba2"} Oct 11 10:38:45.522051 master-2 kubenswrapper[4776]: I1011 10:38:45.521993 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" event={"ID":"6d7c74c7-9652-4fe6-93c3-667ec676ce1c","Type":"ContainerStarted","Data":"9fc905dea7d19fa845edf55c43657fdffbe2e61a962e8c1c109ed88fc33e853c"} Oct 11 10:38:45.522967 master-2 kubenswrapper[4776]: I1011 10:38:45.522928 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:45.529265 master-2 kubenswrapper[4776]: I1011 10:38:45.529228 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" Oct 11 10:38:45.553269 master-2 kubenswrapper[4776]: I1011 10:38:45.553172 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6fccd5ccc-lxq75" podStartSLOduration=31.553147013 
podStartE2EDuration="31.553147013s" podCreationTimestamp="2025-10-11 10:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:38:45.551521599 +0000 UTC m=+760.335948308" watchObservedRunningTime="2025-10-11 10:38:45.553147013 +0000 UTC m=+760.337573732" Oct 11 10:38:48.366055 master-2 kubenswrapper[4776]: I1011 10:38:48.366012 4776 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-2"] Oct 11 10:38:48.366786 master-2 kubenswrapper[4776]: I1011 10:38:48.366759 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="kube-scheduler" containerID="cri-o://65701df136730297d07e924a9003107719afae6bc3e70126f7680b788afdcc01" gracePeriod=30 Oct 11 10:38:48.366908 master-2 kubenswrapper[4776]: I1011 10:38:48.366849 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="kube-scheduler-cert-syncer" containerID="cri-o://b1b4a56fa0152f300c6a99db97775492cfcdce4712ae78b30e2ac340b25efd8c" gracePeriod=30 Oct 11 10:38:48.367008 master-2 kubenswrapper[4776]: I1011 10:38:48.366776 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="kube-scheduler-recovery-controller" containerID="cri-o://3bed6e75ec56e4b27551229ac1cfed015f19cbbd37a51de7899f2a409f7f3107" gracePeriod=30 Oct 11 10:38:48.367790 master-2 kubenswrapper[4776]: I1011 10:38:48.367465 4776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-2"] Oct 11 10:38:48.367790 master-2 kubenswrapper[4776]: E1011 10:38:48.367733 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="kube-scheduler" Oct 11 10:38:48.367790 master-2 kubenswrapper[4776]: I1011 10:38:48.367751 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="kube-scheduler" Oct 11 10:38:48.367790 master-2 kubenswrapper[4776]: E1011 10:38:48.367765 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="kube-scheduler-recovery-controller" Oct 11 10:38:48.367790 master-2 kubenswrapper[4776]: I1011 10:38:48.367773 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="kube-scheduler-recovery-controller" Oct 11 10:38:48.367790 master-2 kubenswrapper[4776]: E1011 10:38:48.367793 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="wait-for-host-port" Oct 11 10:38:48.368023 master-2 kubenswrapper[4776]: I1011 10:38:48.367801 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="wait-for-host-port" Oct 11 10:38:48.368023 master-2 kubenswrapper[4776]: E1011 10:38:48.367814 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="kube-scheduler-cert-syncer" Oct 11 10:38:48.368023 master-2 kubenswrapper[4776]: I1011 10:38:48.367822 4776 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="kube-scheduler-cert-syncer" Oct 11 10:38:48.368023 master-2 kubenswrapper[4776]: I1011 10:38:48.367948 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="kube-scheduler-recovery-controller" Oct 11 10:38:48.368023 master-2 kubenswrapper[4776]: I1011 10:38:48.367963 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="kube-scheduler" Oct 11 10:38:48.368023 master-2 kubenswrapper[4776]: I1011 10:38:48.367977 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerName="kube-scheduler-cert-syncer" Oct 11 10:38:48.497048 master-2 kubenswrapper[4776]: I1011 10:38:48.496945 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/09a1584aa5985a5ff9600248bcf73e77-cert-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: \"09a1584aa5985a5ff9600248bcf73e77\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:38:48.497294 master-2 kubenswrapper[4776]: I1011 10:38:48.497081 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/09a1584aa5985a5ff9600248bcf73e77-resource-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: \"09a1584aa5985a5ff9600248bcf73e77\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:38:48.544538 master-2 kubenswrapper[4776]: I1011 10:38:48.544467 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-2_f26cf13b1c8c4f1b57c0ac506ef256a4/kube-scheduler-cert-syncer/0.log" Oct 11 10:38:48.545496 master-2 kubenswrapper[4776]: I1011 10:38:48.545447 4776 generic.go:334] "Generic (PLEG): container finished" podID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerID="3bed6e75ec56e4b27551229ac1cfed015f19cbbd37a51de7899f2a409f7f3107" exitCode=0 Oct 11 10:38:48.545496 master-2 kubenswrapper[4776]: I1011 10:38:48.545487 4776 generic.go:334] "Generic (PLEG): container finished" podID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerID="b1b4a56fa0152f300c6a99db97775492cfcdce4712ae78b30e2ac340b25efd8c" exitCode=2 Oct 11 10:38:48.598453 master-2 kubenswrapper[4776]: I1011 10:38:48.598385 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/09a1584aa5985a5ff9600248bcf73e77-resource-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: \"09a1584aa5985a5ff9600248bcf73e77\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:38:48.598641 master-2 kubenswrapper[4776]: I1011 10:38:48.598530 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/09a1584aa5985a5ff9600248bcf73e77-cert-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: \"09a1584aa5985a5ff9600248bcf73e77\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:38:48.598641 master-2 kubenswrapper[4776]: I1011 10:38:48.598540 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/09a1584aa5985a5ff9600248bcf73e77-resource-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: 
\"09a1584aa5985a5ff9600248bcf73e77\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:38:48.598754 master-2 kubenswrapper[4776]: I1011 10:38:48.598636 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/09a1584aa5985a5ff9600248bcf73e77-cert-dir\") pod \"openshift-kube-scheduler-master-2\" (UID: \"09a1584aa5985a5ff9600248bcf73e77\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:38:48.804095 master-2 kubenswrapper[4776]: I1011 10:38:48.804049 4776 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-2 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.12:10259/healthz\": dial tcp 192.168.34.12:10259: connect: connection refused" start-of-body= Oct 11 10:38:48.804195 master-2 kubenswrapper[4776]: I1011 10:38:48.804111 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" podUID="c76a7758-6688-4e6c-a01a-c3e29db3c134" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10259/healthz\": dial tcp 192.168.34.12:10259: connect: connection refused" Oct 11 10:38:48.884878 master-2 kubenswrapper[4776]: I1011 10:38:48.884782 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-2_f26cf13b1c8c4f1b57c0ac506ef256a4/kube-scheduler-cert-syncer/0.log" Oct 11 10:38:48.886513 master-2 kubenswrapper[4776]: I1011 10:38:48.886462 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:38:48.891698 master-2 kubenswrapper[4776]: I1011 10:38:48.891635 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" oldPodUID="f26cf13b1c8c4f1b57c0ac506ef256a4" podUID="09a1584aa5985a5ff9600248bcf73e77" Oct 11 10:38:49.003223 master-2 kubenswrapper[4776]: I1011 10:38:49.003063 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-cert-dir\") pod \"f26cf13b1c8c4f1b57c0ac506ef256a4\" (UID: \"f26cf13b1c8c4f1b57c0ac506ef256a4\") " Oct 11 10:38:49.003223 master-2 kubenswrapper[4776]: I1011 10:38:49.003152 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-resource-dir\") pod \"f26cf13b1c8c4f1b57c0ac506ef256a4\" (UID: \"f26cf13b1c8c4f1b57c0ac506ef256a4\") " Oct 11 10:38:49.003565 master-2 kubenswrapper[4776]: I1011 10:38:49.003261 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f26cf13b1c8c4f1b57c0ac506ef256a4" (UID: "f26cf13b1c8c4f1b57c0ac506ef256a4"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:38:49.003565 master-2 kubenswrapper[4776]: I1011 10:38:49.003528 4776 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-resource-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:49.003992 master-2 kubenswrapper[4776]: I1011 10:38:49.003931 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f26cf13b1c8c4f1b57c0ac506ef256a4" (UID: "f26cf13b1c8c4f1b57c0ac506ef256a4"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:38:49.104949 master-2 kubenswrapper[4776]: I1011 10:38:49.104863 4776 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f26cf13b1c8c4f1b57c0ac506ef256a4-cert-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:49.193212 master-2 kubenswrapper[4776]: I1011 10:38:49.193166 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Oct 11 10:38:49.193487 master-2 kubenswrapper[4776]: I1011 10:38:49.193462 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" Oct 11 10:38:49.556889 master-2 kubenswrapper[4776]: I1011 10:38:49.556742 4776 generic.go:334] "Generic (PLEG): container finished" podID="8755f64d-7ff8-4df3-ae55-c1154ba02830" containerID="9153289ceddc1077d563995a41dced39a8a3e20ad2f9b47e07f851d3852a7efc" exitCode=0 Oct 11 10:38:49.556889 master-2 kubenswrapper[4776]: I1011 10:38:49.556865 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-2" event={"ID":"8755f64d-7ff8-4df3-ae55-c1154ba02830","Type":"ContainerDied","Data":"9153289ceddc1077d563995a41dced39a8a3e20ad2f9b47e07f851d3852a7efc"} Oct 11 10:38:49.560295 master-2 kubenswrapper[4776]: I1011 10:38:49.560236 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-2_f26cf13b1c8c4f1b57c0ac506ef256a4/kube-scheduler-cert-syncer/0.log" Oct 11 10:38:49.561938 master-2 kubenswrapper[4776]: I1011 10:38:49.561860 4776 generic.go:334] "Generic (PLEG): container finished" podID="f26cf13b1c8c4f1b57c0ac506ef256a4" containerID="65701df136730297d07e924a9003107719afae6bc3e70126f7680b788afdcc01" exitCode=0 Oct 11 10:38:49.562091 master-2 kubenswrapper[4776]: I1011 10:38:49.561946 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ed01188c7283cfff70d6c5cb4504465f9e9f1843a1b8c89bb6c36df04a63ac6" Oct 11 10:38:49.562091 master-2 kubenswrapper[4776]: I1011 10:38:49.562066 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:38:49.595403 master-2 kubenswrapper[4776]: I1011 10:38:49.594383 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" oldPodUID="f26cf13b1c8c4f1b57c0ac506ef256a4" podUID="09a1584aa5985a5ff9600248bcf73e77" Oct 11 10:38:49.603504 master-2 kubenswrapper[4776]: I1011 10:38:49.603418 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" oldPodUID="f26cf13b1c8c4f1b57c0ac506ef256a4" podUID="09a1584aa5985a5ff9600248bcf73e77" Oct 11 10:38:50.023155 master-2 kubenswrapper[4776]: I1011 10:38:50.023080 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:50.023155 master-2 kubenswrapper[4776]: I1011 10:38:50.023151 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:38:50.025458 master-2 kubenswrapper[4776]: I1011 10:38:50.025409 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body= Oct 11 10:38:50.025541 master-2 kubenswrapper[4776]: I1011 10:38:50.025476 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" Oct 11 10:38:50.066715 master-2 kubenswrapper[4776]: I1011 10:38:50.066643 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f26cf13b1c8c4f1b57c0ac506ef256a4" path="/var/lib/kubelet/pods/f26cf13b1c8c4f1b57c0ac506ef256a4/volumes" Oct 11 10:38:50.264836 master-2 kubenswrapper[4776]: I1011 10:38:50.264761 4776 patch_prober.go:28] interesting pod/metrics-server-65d86dff78-crzgp container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:38:50.264836 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:38:50.264836 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:38:50.264836 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:38:50.264836 master-2 kubenswrapper[4776]: [+]metric-storage-ready ok Oct 11 10:38:50.264836 master-2 kubenswrapper[4776]: [+]metric-informer-sync ok Oct 11 10:38:50.264836 master-2 kubenswrapper[4776]: [+]metadata-informer-sync ok Oct 11 10:38:50.264836 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:38:50.264836 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:38:50.264836 master-2 kubenswrapper[4776]: I1011 10:38:50.264829 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" podUID="5473628e-94c8-4706-bb03-ff4836debe5f" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:38:50.939725 master-2 kubenswrapper[4776]: I1011 10:38:50.939666 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:51.036017 master-2 kubenswrapper[4776]: I1011 10:38:51.035945 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8755f64d-7ff8-4df3-ae55-c1154ba02830" (UID: "8755f64d-7ff8-4df3-ae55-c1154ba02830"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:38:51.036925 master-2 kubenswrapper[4776]: I1011 10:38:51.036881 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-kubelet-dir\") pod \"8755f64d-7ff8-4df3-ae55-c1154ba02830\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") " Oct 11 10:38:51.036998 master-2 kubenswrapper[4776]: I1011 10:38:51.036973 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8755f64d-7ff8-4df3-ae55-c1154ba02830-kube-api-access\") pod \"8755f64d-7ff8-4df3-ae55-c1154ba02830\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") " Oct 11 10:38:51.037047 master-2 kubenswrapper[4776]: I1011 10:38:51.037014 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-var-lock\") pod \"8755f64d-7ff8-4df3-ae55-c1154ba02830\" (UID: \"8755f64d-7ff8-4df3-ae55-c1154ba02830\") " Oct 11 10:38:51.037235 master-2 kubenswrapper[4776]: I1011 10:38:51.037209 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-var-lock" (OuterVolumeSpecName: "var-lock") pod "8755f64d-7ff8-4df3-ae55-c1154ba02830" (UID: "8755f64d-7ff8-4df3-ae55-c1154ba02830"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:38:51.039383 master-2 kubenswrapper[4776]: I1011 10:38:51.037541 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-kubelet-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:51.039383 master-2 kubenswrapper[4776]: I1011 10:38:51.037587 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8755f64d-7ff8-4df3-ae55-c1154ba02830-var-lock\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:51.040127 master-2 kubenswrapper[4776]: I1011 10:38:51.039874 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8755f64d-7ff8-4df3-ae55-c1154ba02830-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8755f64d-7ff8-4df3-ae55-c1154ba02830" (UID: "8755f64d-7ff8-4df3-ae55-c1154ba02830"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:38:51.138751 master-2 kubenswrapper[4776]: I1011 10:38:51.138694 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8755f64d-7ff8-4df3-ae55-c1154ba02830-kube-api-access\") on node \"master-2\" DevicePath \"\"" Oct 11 10:38:51.575910 master-2 kubenswrapper[4776]: I1011 10:38:51.575859 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-2" event={"ID":"8755f64d-7ff8-4df3-ae55-c1154ba02830","Type":"ContainerDied","Data":"778cb332dda0b2164f4a18cca4a33c538ed0cde607af7400e50992351c45e610"} Oct 11 10:38:51.576215 master-2 kubenswrapper[4776]: I1011 10:38:51.576195 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="778cb332dda0b2164f4a18cca4a33c538ed0cde607af7400e50992351c45e610" Oct 11 10:38:51.576304 master-2 kubenswrapper[4776]: I1011 10:38:51.575910 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-2" Oct 11 10:38:51.642891 master-2 kubenswrapper[4776]: I1011 10:38:51.642846 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-5-master-2"] Oct 11 10:38:51.643302 master-2 kubenswrapper[4776]: E1011 10:38:51.643288 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8755f64d-7ff8-4df3-ae55-c1154ba02830" containerName="installer" Oct 11 10:38:51.643370 master-2 kubenswrapper[4776]: I1011 10:38:51.643361 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="8755f64d-7ff8-4df3-ae55-c1154ba02830" containerName="installer" Oct 11 10:38:51.643527 master-2 kubenswrapper[4776]: I1011 10:38:51.643515 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="8755f64d-7ff8-4df3-ae55-c1154ba02830" containerName="installer" Oct 11 10:38:51.644063 master-2 kubenswrapper[4776]: I1011 10:38:51.644047 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-2" Oct 11 10:38:51.646935 master-2 kubenswrapper[4776]: I1011 10:38:51.646917 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-djvlq" Oct 11 10:38:51.657720 master-2 kubenswrapper[4776]: I1011 10:38:51.657658 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-2"] Oct 11 10:38:51.746403 master-2 kubenswrapper[4776]: I1011 10:38:51.746349 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kubelet-dir\") pod \"installer-5-master-2\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " pod="openshift-kube-controller-manager/installer-5-master-2" Oct 11 10:38:51.746609 master-2 kubenswrapper[4776]: I1011 10:38:51.746439 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kube-api-access\") pod \"installer-5-master-2\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " pod="openshift-kube-controller-manager/installer-5-master-2" Oct 11 10:38:51.746609 master-2 kubenswrapper[4776]: I1011 10:38:51.746460 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-var-lock\") pod \"installer-5-master-2\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " pod="openshift-kube-controller-manager/installer-5-master-2" Oct 11 10:38:51.847264 master-2 kubenswrapper[4776]: I1011 10:38:51.847133 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kubelet-dir\") pod \"installer-5-master-2\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " pod="openshift-kube-controller-manager/installer-5-master-2" Oct 11 10:38:51.847264 master-2 kubenswrapper[4776]: I1011 10:38:51.847221 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kube-api-access\") pod \"installer-5-master-2\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " pod="openshift-kube-controller-manager/installer-5-master-2" Oct 11 10:38:51.847264 master-2 kubenswrapper[4776]: I1011 10:38:51.847238 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-var-lock\") pod \"installer-5-master-2\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " pod="openshift-kube-controller-manager/installer-5-master-2" Oct 11 10:38:51.847488 master-2 kubenswrapper[4776]: I1011 10:38:51.847299 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-var-lock\") pod \"installer-5-master-2\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " pod="openshift-kube-controller-manager/installer-5-master-2" Oct 11 10:38:51.847488 master-2 kubenswrapper[4776]: I1011 10:38:51.847306 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kubelet-dir\") pod \"installer-5-master-2\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " pod="openshift-kube-controller-manager/installer-5-master-2" Oct 11 10:38:51.864595 master-2 kubenswrapper[4776]: I1011 10:38:51.864533 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kube-api-access\") pod \"installer-5-master-2\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " pod="openshift-kube-controller-manager/installer-5-master-2" Oct 11 10:38:51.974597 master-2 kubenswrapper[4776]: I1011 10:38:51.974518 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-2" Oct 11 10:38:52.365563 master-2 kubenswrapper[4776]: I1011 10:38:52.365492 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-2"] Oct 11 10:38:52.368864 master-2 kubenswrapper[4776]: W1011 10:38:52.368817 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5c59b4d8_fa7a_4c50_b130_8b4857359efa.slice/crio-966a418bbbd7a4398464bc0352257041c54b7d3624ba848b9d0be21d900f1a4b WatchSource:0}: Error finding container 966a418bbbd7a4398464bc0352257041c54b7d3624ba848b9d0be21d900f1a4b: Status 404 returned error can't find the container with id 966a418bbbd7a4398464bc0352257041c54b7d3624ba848b9d0be21d900f1a4b Oct 11 10:38:52.582151 master-2 kubenswrapper[4776]: I1011 10:38:52.582088 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-2" event={"ID":"5c59b4d8-fa7a-4c50-b130-8b4857359efa","Type":"ContainerStarted","Data":"966a418bbbd7a4398464bc0352257041c54b7d3624ba848b9d0be21d900f1a4b"} Oct 11 10:38:53.593881 master-2 kubenswrapper[4776]: I1011 10:38:53.593789 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-2" event={"ID":"5c59b4d8-fa7a-4c50-b130-8b4857359efa","Type":"ContainerStarted","Data":"b6fb2721b520ebe1cede0be4cfc4189e99d8b75e5efbec478e75775987a6a914"} Oct 11 10:38:53.622233 master-2 kubenswrapper[4776]: I1011 10:38:53.622098 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-5-master-2" podStartSLOduration=2.622071505 podStartE2EDuration="2.622071505s" podCreationTimestamp="2025-10-11 10:38:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:38:53.620601644 +0000 UTC m=+768.405028393" watchObservedRunningTime="2025-10-11 10:38:53.622071505 +0000 UTC m=+768.406498224" Oct 11 10:38:53.804346 master-2 kubenswrapper[4776]: I1011 10:38:53.804258 4776 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-2 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.12:10259/healthz\": dial tcp 192.168.34.12:10259: connect: connection refused" start-of-body= Oct 11 10:38:53.804612 master-2 kubenswrapper[4776]: I1011 10:38:53.804348 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" podUID="c76a7758-6688-4e6c-a01a-c3e29db3c134" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10259/healthz\": dial tcp 
192.168.34.12:10259: connect: connection refused" Oct 11 10:38:54.193574 master-2 kubenswrapper[4776]: I1011 10:38:54.193493 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Oct 11 10:38:54.193903 master-2 kubenswrapper[4776]: I1011 10:38:54.193579 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" Oct 11 10:38:58.804040 master-2 kubenswrapper[4776]: I1011 10:38:58.803903 4776 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-2 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.12:10259/healthz\": dial tcp 192.168.34.12:10259: connect: connection refused" start-of-body= Oct 11 10:38:58.805143 master-2 kubenswrapper[4776]: I1011 10:38:58.804028 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" podUID="c76a7758-6688-4e6c-a01a-c3e29db3c134" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10259/healthz\": dial tcp 192.168.34.12:10259: connect: connection refused" Oct 11 10:38:58.805143 master-2 kubenswrapper[4776]: I1011 10:38:58.804240 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" Oct 11 10:38:58.805329 master-2 kubenswrapper[4776]: I1011 10:38:58.805265 4776 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-2 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.12:10259/healthz\": dial tcp 192.168.34.12:10259: connect: connection refused" start-of-body= Oct 11 10:38:58.805456 master-2 kubenswrapper[4776]: I1011 10:38:58.805362 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" podUID="c76a7758-6688-4e6c-a01a-c3e29db3c134" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10259/healthz\": dial tcp 192.168.34.12:10259: connect: connection refused" Oct 11 10:38:59.063238 master-2 kubenswrapper[4776]: I1011 10:38:59.063115 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:38:59.095175 master-2 kubenswrapper[4776]: I1011 10:38:59.095131 4776 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" podUID="ef6d4604-7cf7-4d1f-a697-4dc720b4a516" Oct 11 10:38:59.095175 master-2 kubenswrapper[4776]: I1011 10:38:59.095171 4776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" podUID="ef6d4604-7cf7-4d1f-a697-4dc720b4a516" Oct 11 10:38:59.127849 master-2 kubenswrapper[4776]: I1011 10:38:59.127802 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-2"] Oct 11 10:38:59.137292 master-2 kubenswrapper[4776]: I1011 10:38:59.137246 4776 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:38:59.140299 master-2 kubenswrapper[4776]: I1011 10:38:59.140244 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-2"] Oct 11 10:38:59.152573 master-2 kubenswrapper[4776]: I1011 10:38:59.152537 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:38:59.156662 master-2 kubenswrapper[4776]: I1011 10:38:59.156627 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-2"] Oct 11 10:38:59.192786 master-2 kubenswrapper[4776]: I1011 10:38:59.192709 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Oct 11 10:38:59.192786 master-2 kubenswrapper[4776]: I1011 10:38:59.192761 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" Oct 11 10:38:59.646014 master-2 kubenswrapper[4776]: I1011 10:38:59.645863 4776 generic.go:334] "Generic (PLEG): container finished" podID="09a1584aa5985a5ff9600248bcf73e77" containerID="2d997c2d4c42e15e75dcfc064346afe164a2ba45f92c9b53915dda78c32c141c" exitCode=0 Oct 11 10:38:59.646014 master-2 kubenswrapper[4776]: I1011 10:38:59.645907 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" event={"ID":"09a1584aa5985a5ff9600248bcf73e77","Type":"ContainerDied","Data":"2d997c2d4c42e15e75dcfc064346afe164a2ba45f92c9b53915dda78c32c141c"} Oct 11 10:38:59.646014 master-2 kubenswrapper[4776]: I1011 10:38:59.645935 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" event={"ID":"09a1584aa5985a5ff9600248bcf73e77","Type":"ContainerStarted","Data":"99f0da196ee9f8a5939f45e2bc1ee4e75e90de563aa9ac9e5f2697426085263c"} Oct 11 10:39:00.025504 master-2 kubenswrapper[4776]: I1011 10:39:00.023452 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure 
output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body= Oct 11 10:39:00.025504 master-2 kubenswrapper[4776]: I1011 10:39:00.023509 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" Oct 11 10:39:00.556404 master-2 kubenswrapper[4776]: I1011 10:39:00.556283 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-656768b4df-5xgzs"] Oct 11 10:39:00.557323 master-2 kubenswrapper[4776]: I1011 10:39:00.557289 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.560449 master-2 kubenswrapper[4776]: I1011 10:39:00.560408 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-zlnjr" Oct 11 10:39:00.560449 master-2 kubenswrapper[4776]: I1011 10:39:00.560428 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 11 10:39:00.560613 master-2 kubenswrapper[4776]: I1011 10:39:00.560552 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 11 10:39:00.560720 master-2 kubenswrapper[4776]: I1011 10:39:00.560642 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 11 10:39:00.560790 master-2 kubenswrapper[4776]: I1011 10:39:00.560735 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 11 10:39:00.560931 master-2 kubenswrapper[4776]: I1011 10:39:00.560887 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 11 10:39:00.563574 master-2 kubenswrapper[4776]: I1011 10:39:00.563541 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 11 10:39:00.563654 master-2 kubenswrapper[4776]: I1011 10:39:00.563573 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 11 10:39:00.563976 master-2 kubenswrapper[4776]: I1011 10:39:00.563952 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 11 10:39:00.582798 master-2 kubenswrapper[4776]: I1011 10:39:00.582721 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-656768b4df-5xgzs"] Oct 11 10:39:00.617612 master-2 kubenswrapper[4776]: I1011 10:39:00.617540 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-jl6f8"] Oct 11 10:39:00.618549 master-2 kubenswrapper[4776]: I1011 10:39:00.618515 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-jl6f8" Oct 11 10:39:00.621787 master-2 kubenswrapper[4776]: I1011 10:39:00.621765 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-cs6lc" Oct 11 10:39:00.621971 master-2 kubenswrapper[4776]: I1011 10:39:00.621954 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Oct 11 10:39:00.657341 master-2 kubenswrapper[4776]: I1011 10:39:00.657292 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" event={"ID":"09a1584aa5985a5ff9600248bcf73e77","Type":"ContainerStarted","Data":"6b08c60868474c9e39e0d7cccbaaeebdd877d3e382a64aea2678c63dee8f27b9"} Oct 11 10:39:00.657341 master-2 kubenswrapper[4776]: I1011 10:39:00.657338 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" event={"ID":"09a1584aa5985a5ff9600248bcf73e77","Type":"ContainerStarted","Data":"442c453f081ca449416e62f4096c8ffc17314444a4aee0a5fb03fe752c9d03d5"} Oct 11 10:39:00.657341 master-2 kubenswrapper[4776]: I1011 10:39:00.657350 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" event={"ID":"09a1584aa5985a5ff9600248bcf73e77","Type":"ContainerStarted","Data":"ce1c9a1bb392147b36f6fb94d3eaa492b2c19737117d1cee7cef002c354e7d3f"} Oct 11 10:39:00.657584 master-2 kubenswrapper[4776]: I1011 10:39:00.657494 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:39:00.680544 master-2 kubenswrapper[4776]: I1011 10:39:00.680470 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" podStartSLOduration=1.680450928 podStartE2EDuration="1.680450928s" podCreationTimestamp="2025-10-11 10:38:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:39:00.678028732 +0000 UTC m=+775.462455441" watchObservedRunningTime="2025-10-11 10:39:00.680450928 +0000 UTC m=+775.464877627" Oct 11 10:39:00.699703 master-2 kubenswrapper[4776]: I1011 10:39:00.698934 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-serving-cert\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.699703 master-2 kubenswrapper[4776]: I1011 10:39:00.698990 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-trusted-ca-bundle\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.699703 master-2 kubenswrapper[4776]: I1011 10:39:00.699011 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-encryption-config\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " 
pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.699703 master-2 kubenswrapper[4776]: I1011 10:39:00.699034 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shjw5\" (UniqueName: \"kubernetes.io/projected/b1a4fd85-5da5-4697-b524-a68be3d018cf-kube-api-access-shjw5\") pod \"node-ca-jl6f8\" (UID: \"b1a4fd85-5da5-4697-b524-a68be3d018cf\") " pod="openshift-image-registry/node-ca-jl6f8" Oct 11 10:39:00.699703 master-2 kubenswrapper[4776]: I1011 10:39:00.699070 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-dir\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.699703 master-2 kubenswrapper[4776]: I1011 10:39:00.699085 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-client\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.699703 master-2 kubenswrapper[4776]: I1011 10:39:00.699103 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cjsj\" (UniqueName: \"kubernetes.io/projected/407e7df9-fbe8-44b1-8dde-bafa356e904c-kube-api-access-4cjsj\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.699703 master-2 kubenswrapper[4776]: I1011 10:39:00.699123 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b1a4fd85-5da5-4697-b524-a68be3d018cf-serviceca\") pod \"node-ca-jl6f8\" (UID: \"b1a4fd85-5da5-4697-b524-a68be3d018cf\") " pod="openshift-image-registry/node-ca-jl6f8" Oct 11 10:39:00.699703 master-2 kubenswrapper[4776]: I1011 10:39:00.699151 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-serving-ca\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.699703 master-2 kubenswrapper[4776]: I1011 10:39:00.699174 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-policies\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.699703 master-2 kubenswrapper[4776]: I1011 10:39:00.699190 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b1a4fd85-5da5-4697-b524-a68be3d018cf-host\") pod \"node-ca-jl6f8\" (UID: \"b1a4fd85-5da5-4697-b524-a68be3d018cf\") " pod="openshift-image-registry/node-ca-jl6f8" Oct 11 10:39:00.800855 master-2 kubenswrapper[4776]: I1011 10:39:00.800773 4776 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-serving-ca\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.800855 master-2 kubenswrapper[4776]: I1011 10:39:00.800852 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-policies\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.801104 master-2 kubenswrapper[4776]: I1011 10:39:00.800891 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b1a4fd85-5da5-4697-b524-a68be3d018cf-host\") pod \"node-ca-jl6f8\" (UID: \"b1a4fd85-5da5-4697-b524-a68be3d018cf\") " pod="openshift-image-registry/node-ca-jl6f8" Oct 11 10:39:00.801104 master-2 kubenswrapper[4776]: I1011 10:39:00.800956 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-serving-cert\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.801104 master-2 kubenswrapper[4776]: I1011 10:39:00.800995 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-trusted-ca-bundle\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.801104 master-2 kubenswrapper[4776]: I1011 10:39:00.801018 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-encryption-config\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.801104 master-2 kubenswrapper[4776]: I1011 10:39:00.801048 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shjw5\" (UniqueName: \"kubernetes.io/projected/b1a4fd85-5da5-4697-b524-a68be3d018cf-kube-api-access-shjw5\") pod \"node-ca-jl6f8\" (UID: \"b1a4fd85-5da5-4697-b524-a68be3d018cf\") " pod="openshift-image-registry/node-ca-jl6f8" Oct 11 10:39:00.801104 master-2 kubenswrapper[4776]: I1011 10:39:00.801095 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-dir\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.801284 master-2 kubenswrapper[4776]: I1011 10:39:00.801122 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-client\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.801284 master-2 
kubenswrapper[4776]: I1011 10:39:00.801153 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cjsj\" (UniqueName: \"kubernetes.io/projected/407e7df9-fbe8-44b1-8dde-bafa356e904c-kube-api-access-4cjsj\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.801284 master-2 kubenswrapper[4776]: I1011 10:39:00.801185 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b1a4fd85-5da5-4697-b524-a68be3d018cf-serviceca\") pod \"node-ca-jl6f8\" (UID: \"b1a4fd85-5da5-4697-b524-a68be3d018cf\") " pod="openshift-image-registry/node-ca-jl6f8" Oct 11 10:39:00.801456 master-2 kubenswrapper[4776]: I1011 10:39:00.801220 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b1a4fd85-5da5-4697-b524-a68be3d018cf-host\") pod \"node-ca-jl6f8\" (UID: \"b1a4fd85-5da5-4697-b524-a68be3d018cf\") " pod="openshift-image-registry/node-ca-jl6f8" Oct 11 10:39:00.801560 master-2 kubenswrapper[4776]: I1011 10:39:00.801224 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-dir\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.802054 master-2 kubenswrapper[4776]: I1011 10:39:00.802018 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b1a4fd85-5da5-4697-b524-a68be3d018cf-serviceca\") pod \"node-ca-jl6f8\" (UID: \"b1a4fd85-5da5-4697-b524-a68be3d018cf\") " pod="openshift-image-registry/node-ca-jl6f8" Oct 11 10:39:00.802148 master-2 kubenswrapper[4776]: I1011 10:39:00.802116 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-trusted-ca-bundle\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.802754 master-2 kubenswrapper[4776]: I1011 10:39:00.802649 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-serving-ca\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.803157 master-2 kubenswrapper[4776]: I1011 10:39:00.803129 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-policies\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.804585 master-2 kubenswrapper[4776]: I1011 10:39:00.804538 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-client\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.804653 master-2 
kubenswrapper[4776]: I1011 10:39:00.804625 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-serving-cert\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.804716 master-2 kubenswrapper[4776]: I1011 10:39:00.804641 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-encryption-config\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.821703 master-2 kubenswrapper[4776]: I1011 10:39:00.821580 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shjw5\" (UniqueName: \"kubernetes.io/projected/b1a4fd85-5da5-4697-b524-a68be3d018cf-kube-api-access-shjw5\") pod \"node-ca-jl6f8\" (UID: \"b1a4fd85-5da5-4697-b524-a68be3d018cf\") " pod="openshift-image-registry/node-ca-jl6f8" Oct 11 10:39:00.824170 master-2 kubenswrapper[4776]: I1011 10:39:00.824122 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cjsj\" (UniqueName: \"kubernetes.io/projected/407e7df9-fbe8-44b1-8dde-bafa356e904c-kube-api-access-4cjsj\") pod \"apiserver-656768b4df-5xgzs\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.885464 master-2 kubenswrapper[4776]: I1011 10:39:00.885357 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:00.938414 master-2 kubenswrapper[4776]: I1011 10:39:00.934274 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-jl6f8" Oct 11 10:39:01.328096 master-2 kubenswrapper[4776]: I1011 10:39:01.328025 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-656768b4df-5xgzs"] Oct 11 10:39:01.665800 master-2 kubenswrapper[4776]: I1011 10:39:01.665741 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jl6f8" event={"ID":"b1a4fd85-5da5-4697-b524-a68be3d018cf","Type":"ContainerStarted","Data":"ce1be7e853a84ffbeb127e872f1d29dc22c8f25bb1faf113830334a76d8ee276"} Oct 11 10:39:01.668849 master-2 kubenswrapper[4776]: I1011 10:39:01.668097 4776 generic.go:334] "Generic (PLEG): container finished" podID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerID="e22a6e86526375bc0d8d2c641b58225fad6eb7321af7924f677c48967d81f960" exitCode=0 Oct 11 10:39:01.668849 master-2 kubenswrapper[4776]: I1011 10:39:01.668163 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" event={"ID":"407e7df9-fbe8-44b1-8dde-bafa356e904c","Type":"ContainerDied","Data":"e22a6e86526375bc0d8d2c641b58225fad6eb7321af7924f677c48967d81f960"} Oct 11 10:39:01.668849 master-2 kubenswrapper[4776]: I1011 10:39:01.668188 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" event={"ID":"407e7df9-fbe8-44b1-8dde-bafa356e904c","Type":"ContainerStarted","Data":"4696d703bfc528a3bf9bd99fc217e6dc2e1faa3cb905d36cd446e1df3ecf761e"} Oct 11 10:39:02.675075 master-2 kubenswrapper[4776]: I1011 10:39:02.675017 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" event={"ID":"407e7df9-fbe8-44b1-8dde-bafa356e904c","Type":"ContainerStarted","Data":"0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee"} Oct 11 10:39:02.705469 master-2 kubenswrapper[4776]: I1011 10:39:02.705373 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" podStartSLOduration=75.705351278 podStartE2EDuration="1m15.705351278s" podCreationTimestamp="2025-10-11 10:37:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:39:02.700917988 +0000 UTC m=+777.485344727" watchObservedRunningTime="2025-10-11 10:39:02.705351278 +0000 UTC m=+777.489777987" Oct 11 10:39:03.681177 master-2 kubenswrapper[4776]: I1011 10:39:03.681118 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jl6f8" event={"ID":"b1a4fd85-5da5-4697-b524-a68be3d018cf","Type":"ContainerStarted","Data":"4665e07708c514b802cdfed7903bdfb649781ab12f1d4e332f7c69e8568925eb"} Oct 11 10:39:03.716196 master-2 kubenswrapper[4776]: I1011 10:39:03.716128 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-jl6f8" podStartSLOduration=12.830241438 podStartE2EDuration="14.716112191s" podCreationTimestamp="2025-10-11 10:38:49 +0000 UTC" firstStartedPulling="2025-10-11 10:39:00.987960502 +0000 UTC m=+775.772387211" lastFinishedPulling="2025-10-11 10:39:02.873831245 +0000 UTC m=+777.658257964" observedRunningTime="2025-10-11 10:39:03.713627394 +0000 UTC m=+778.498054103" watchObservedRunningTime="2025-10-11 10:39:03.716112191 +0000 UTC m=+778.500538900" Oct 11 10:39:03.808213 master-2 kubenswrapper[4776]: I1011 10:39:03.808157 4776 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-2" Oct 11 10:39:04.193219 master-2 kubenswrapper[4776]: I1011 10:39:04.193163 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Oct 11 10:39:04.193584 master-2 kubenswrapper[4776]: I1011 10:39:04.193244 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" Oct 11 10:39:05.885937 master-2 kubenswrapper[4776]: I1011 10:39:05.885862 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:05.886530 master-2 kubenswrapper[4776]: I1011 10:39:05.886188 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:05.894124 master-2 kubenswrapper[4776]: I1011 10:39:05.894090 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:06.701170 master-2 kubenswrapper[4776]: I1011 10:39:06.701085 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:39:09.193309 master-2 kubenswrapper[4776]: I1011 10:39:09.193234 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Oct 11 10:39:09.193309 master-2 kubenswrapper[4776]: I1011 10:39:09.193294 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" Oct 11 10:39:10.024073 master-2 kubenswrapper[4776]: I1011 10:39:10.023987 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body= Oct 11 10:39:10.024073 master-2 kubenswrapper[4776]: I1011 10:39:10.024053 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" Oct 11 10:39:10.266803 master-2 kubenswrapper[4776]: I1011 10:39:10.266657 4776 patch_prober.go:28] interesting pod/metrics-server-65d86dff78-crzgp container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[+]ping ok Oct 11 10:39:10.266803 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:39:10.266803 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:39:10.266803 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:39:10.266803 master-2 kubenswrapper[4776]: [+]metric-storage-ready ok Oct 11 10:39:10.266803 master-2 kubenswrapper[4776]: [+]metric-informer-sync ok Oct 11 10:39:10.266803 master-2 kubenswrapper[4776]: [+]metadata-informer-sync ok Oct 11 10:39:10.266803 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:39:10.266803 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:39:10.266803 master-2 kubenswrapper[4776]: I1011 10:39:10.266779 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" podUID="5473628e-94c8-4706-bb03-ff4836debe5f" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:39:14.192662 master-2 kubenswrapper[4776]: I1011 10:39:14.192592 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Oct 11 10:39:14.192662 master-2 kubenswrapper[4776]: I1011 10:39:14.192640 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" Oct 11 10:39:14.976983 master-2 kubenswrapper[4776]: I1011 10:39:14.976907 4776 scope.go:117] "RemoveContainer" containerID="e726f4cf3755426805ed7e9bd7973871407e8c8b66372a8c807859b61c3bd2f3" Oct 11 10:39:15.000861 master-2 kubenswrapper[4776]: I1011 10:39:15.000801 4776 scope.go:117] "RemoveContainer" containerID="90d6acbfbe353ba98c33d9d9275a952ddeabac687eed9e519947f935a2f44edf" Oct 11 10:39:15.016745 master-2 kubenswrapper[4776]: I1011 10:39:15.016703 4776 scope.go:117] "RemoveContainer" containerID="e2dd8c36e185fe9780ea6d5b908ce2843fc734e8fa6bcfa6808d36a4d7c261b0" Oct 11 10:39:19.193101 master-2 kubenswrapper[4776]: I1011 10:39:19.193027 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Oct 11 10:39:19.193998 master-2 kubenswrapper[4776]: I1011 10:39:19.193105 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" Oct 11 10:39:20.024622 master-2 kubenswrapper[4776]: I1011 10:39:20.024553 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 
10.128.0.79:8443: connect: connection refused" start-of-body= Oct 11 10:39:20.024622 master-2 kubenswrapper[4776]: I1011 10:39:20.024609 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" Oct 11 10:39:24.193393 master-2 kubenswrapper[4776]: I1011 10:39:24.193299 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Oct 11 10:39:24.193991 master-2 kubenswrapper[4776]: I1011 10:39:24.193399 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" Oct 11 10:39:25.284870 master-2 kubenswrapper[4776]: I1011 10:39:25.284786 4776 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"] Oct 11 10:39:25.286075 master-2 kubenswrapper[4776]: I1011 10:39:25.285041 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="4f88b73b0d121e855641834122063be9" containerName="kube-controller-manager" containerID="cri-o://10893b6ff26cfe0860be9681aea4e014407f210edeb0807d7d50c1f9edb2d910" gracePeriod=30 Oct 11 10:39:25.286075 master-2 kubenswrapper[4776]: I1011 10:39:25.285117 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="4f88b73b0d121e855641834122063be9" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://0f23b40fdead0283107ab65c161704aa7acddce7d9fe04f98e8ea9ab7f03a4cd" gracePeriod=30 Oct 11 10:39:25.286075 master-2 kubenswrapper[4776]: I1011 10:39:25.285175 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="4f88b73b0d121e855641834122063be9" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://1c66c420f42b218cae752c5f11d7d84132ff9087b3b755852c6f533f1acaeece" gracePeriod=30 Oct 11 10:39:25.286075 master-2 kubenswrapper[4776]: I1011 10:39:25.285180 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="4f88b73b0d121e855641834122063be9" containerName="cluster-policy-controller" containerID="cri-o://f3c0c4b66c129c923e0c2ad907f82ab3a83141f8e7f3805af522d3e693204962" gracePeriod=30 Oct 11 10:39:25.287515 master-2 kubenswrapper[4776]: I1011 10:39:25.287356 4776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"] Oct 11 10:39:25.287659 master-2 kubenswrapper[4776]: E1011 10:39:25.287563 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f88b73b0d121e855641834122063be9" containerName="kube-controller-manager" Oct 
11 10:39:25.287659 master-2 kubenswrapper[4776]: I1011 10:39:25.287576 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f88b73b0d121e855641834122063be9" containerName="kube-controller-manager" Oct 11 10:39:25.287659 master-2 kubenswrapper[4776]: E1011 10:39:25.287592 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f88b73b0d121e855641834122063be9" containerName="cluster-policy-controller" Oct 11 10:39:25.287659 master-2 kubenswrapper[4776]: I1011 10:39:25.287630 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f88b73b0d121e855641834122063be9" containerName="cluster-policy-controller" Oct 11 10:39:25.287659 master-2 kubenswrapper[4776]: E1011 10:39:25.287647 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f88b73b0d121e855641834122063be9" containerName="kube-controller-manager-cert-syncer" Oct 11 10:39:25.287659 master-2 kubenswrapper[4776]: I1011 10:39:25.287655 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f88b73b0d121e855641834122063be9" containerName="kube-controller-manager-cert-syncer" Oct 11 10:39:25.287659 master-2 kubenswrapper[4776]: E1011 10:39:25.287693 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f88b73b0d121e855641834122063be9" containerName="kube-controller-manager-recovery-controller" Oct 11 10:39:25.287659 master-2 kubenswrapper[4776]: I1011 10:39:25.287703 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f88b73b0d121e855641834122063be9" containerName="kube-controller-manager-recovery-controller" Oct 11 10:39:25.288413 master-2 kubenswrapper[4776]: I1011 10:39:25.287829 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f88b73b0d121e855641834122063be9" containerName="cluster-policy-controller" Oct 11 10:39:25.288413 master-2 kubenswrapper[4776]: I1011 10:39:25.287844 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f88b73b0d121e855641834122063be9" containerName="kube-controller-manager-cert-syncer" Oct 11 10:39:25.288413 master-2 kubenswrapper[4776]: I1011 10:39:25.287866 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f88b73b0d121e855641834122063be9" containerName="kube-controller-manager-recovery-controller" Oct 11 10:39:25.288413 master-2 kubenswrapper[4776]: I1011 10:39:25.287875 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f88b73b0d121e855641834122063be9" containerName="kube-controller-manager" Oct 11 10:39:25.377483 master-2 kubenswrapper[4776]: I1011 10:39:25.377395 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-resource-dir\") pod \"kube-controller-manager-master-2\" (UID: \"2dd82f838b5636582534da82a3996ea6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:25.377712 master-2 kubenswrapper[4776]: I1011 10:39:25.377503 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-cert-dir\") pod \"kube-controller-manager-master-2\" (UID: \"2dd82f838b5636582534da82a3996ea6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:25.455924 master-2 kubenswrapper[4776]: I1011 10:39:25.455350 4776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_4f88b73b0d121e855641834122063be9/kube-controller-manager-cert-syncer/0.log" Oct 11 10:39:25.456479 master-2 kubenswrapper[4776]: I1011 10:39:25.456444 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:25.462277 master-2 kubenswrapper[4776]: I1011 10:39:25.462228 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" oldPodUID="4f88b73b0d121e855641834122063be9" podUID="2dd82f838b5636582534da82a3996ea6" Oct 11 10:39:25.478096 master-2 kubenswrapper[4776]: I1011 10:39:25.477960 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-cert-dir\") pod \"4f88b73b0d121e855641834122063be9\" (UID: \"4f88b73b0d121e855641834122063be9\") " Oct 11 10:39:25.478292 master-2 kubenswrapper[4776]: I1011 10:39:25.478118 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-resource-dir\") pod \"4f88b73b0d121e855641834122063be9\" (UID: \"4f88b73b0d121e855641834122063be9\") " Oct 11 10:39:25.478292 master-2 kubenswrapper[4776]: I1011 10:39:25.478195 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "4f88b73b0d121e855641834122063be9" (UID: "4f88b73b0d121e855641834122063be9"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:39:25.478376 master-2 kubenswrapper[4776]: I1011 10:39:25.478309 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "4f88b73b0d121e855641834122063be9" (UID: "4f88b73b0d121e855641834122063be9"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:39:25.478486 master-2 kubenswrapper[4776]: I1011 10:39:25.478438 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-resource-dir\") pod \"kube-controller-manager-master-2\" (UID: \"2dd82f838b5636582534da82a3996ea6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:25.478573 master-2 kubenswrapper[4776]: I1011 10:39:25.478525 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-resource-dir\") pod \"kube-controller-manager-master-2\" (UID: \"2dd82f838b5636582534da82a3996ea6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:25.478611 master-2 kubenswrapper[4776]: I1011 10:39:25.478572 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-cert-dir\") pod \"kube-controller-manager-master-2\" (UID: \"2dd82f838b5636582534da82a3996ea6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:25.478639 master-2 kubenswrapper[4776]: I1011 10:39:25.478607 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-cert-dir\") pod \"kube-controller-manager-master-2\" (UID: \"2dd82f838b5636582534da82a3996ea6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:25.478832 master-2 kubenswrapper[4776]: I1011 10:39:25.478793 4776 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-resource-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:39:25.478832 master-2 kubenswrapper[4776]: I1011 10:39:25.478816 4776 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4f88b73b0d121e855641834122063be9-cert-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:39:25.809760 master-2 kubenswrapper[4776]: I1011 10:39:25.809702 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_4f88b73b0d121e855641834122063be9/kube-controller-manager-cert-syncer/0.log" Oct 11 10:39:25.811431 master-2 kubenswrapper[4776]: I1011 10:39:25.810877 4776 generic.go:334] "Generic (PLEG): container finished" podID="4f88b73b0d121e855641834122063be9" containerID="1c66c420f42b218cae752c5f11d7d84132ff9087b3b755852c6f533f1acaeece" exitCode=0 Oct 11 10:39:25.811431 master-2 kubenswrapper[4776]: I1011 10:39:25.810909 4776 generic.go:334] "Generic (PLEG): container finished" podID="4f88b73b0d121e855641834122063be9" containerID="0f23b40fdead0283107ab65c161704aa7acddce7d9fe04f98e8ea9ab7f03a4cd" exitCode=2 Oct 11 10:39:25.811431 master-2 kubenswrapper[4776]: I1011 10:39:25.810921 4776 generic.go:334] "Generic (PLEG): container finished" podID="4f88b73b0d121e855641834122063be9" containerID="f3c0c4b66c129c923e0c2ad907f82ab3a83141f8e7f3805af522d3e693204962" exitCode=0 Oct 11 10:39:25.811431 master-2 kubenswrapper[4776]: I1011 10:39:25.810955 4776 generic.go:334] "Generic (PLEG): container finished" podID="4f88b73b0d121e855641834122063be9" 
containerID="10893b6ff26cfe0860be9681aea4e014407f210edeb0807d7d50c1f9edb2d910" exitCode=0 Oct 11 10:39:25.811431 master-2 kubenswrapper[4776]: I1011 10:39:25.810958 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:25.811431 master-2 kubenswrapper[4776]: I1011 10:39:25.811064 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="916ddd0f284b303e9bb4961df811012eddf3484459c699dfd12d49002c642155" Oct 11 10:39:25.813055 master-2 kubenswrapper[4776]: I1011 10:39:25.813019 4776 generic.go:334] "Generic (PLEG): container finished" podID="5c59b4d8-fa7a-4c50-b130-8b4857359efa" containerID="b6fb2721b520ebe1cede0be4cfc4189e99d8b75e5efbec478e75775987a6a914" exitCode=0 Oct 11 10:39:25.813055 master-2 kubenswrapper[4776]: I1011 10:39:25.813045 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-2" event={"ID":"5c59b4d8-fa7a-4c50-b130-8b4857359efa","Type":"ContainerDied","Data":"b6fb2721b520ebe1cede0be4cfc4189e99d8b75e5efbec478e75775987a6a914"} Oct 11 10:39:25.816892 master-2 kubenswrapper[4776]: I1011 10:39:25.816835 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" oldPodUID="4f88b73b0d121e855641834122063be9" podUID="2dd82f838b5636582534da82a3996ea6" Oct 11 10:39:25.840897 master-2 kubenswrapper[4776]: I1011 10:39:25.840814 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" oldPodUID="4f88b73b0d121e855641834122063be9" podUID="2dd82f838b5636582534da82a3996ea6" Oct 11 10:39:26.066986 master-2 kubenswrapper[4776]: I1011 10:39:26.066804 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f88b73b0d121e855641834122063be9" path="/var/lib/kubelet/pods/4f88b73b0d121e855641834122063be9/volumes" Oct 11 10:39:26.465194 master-2 kubenswrapper[4776]: I1011 10:39:26.465127 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:39:26.465194 master-2 kubenswrapper[4776]: I1011 10:39:26.465188 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:39:27.247765 master-2 kubenswrapper[4776]: I1011 10:39:27.247711 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-2" Oct 11 10:39:27.405582 master-2 kubenswrapper[4776]: I1011 10:39:27.405523 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kubelet-dir\") pod \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " Oct 11 10:39:27.406024 master-2 kubenswrapper[4776]: I1011 10:39:27.405758 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5c59b4d8-fa7a-4c50-b130-8b4857359efa" (UID: "5c59b4d8-fa7a-4c50-b130-8b4857359efa"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:39:27.406138 master-2 kubenswrapper[4776]: I1011 10:39:27.405997 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kube-api-access\") pod \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " Oct 11 10:39:27.406268 master-2 kubenswrapper[4776]: I1011 10:39:27.406250 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-var-lock\") pod \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\" (UID: \"5c59b4d8-fa7a-4c50-b130-8b4857359efa\") " Oct 11 10:39:27.406560 master-2 kubenswrapper[4776]: I1011 10:39:27.406521 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-var-lock" (OuterVolumeSpecName: "var-lock") pod "5c59b4d8-fa7a-4c50-b130-8b4857359efa" (UID: "5c59b4d8-fa7a-4c50-b130-8b4857359efa"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:39:27.406904 master-2 kubenswrapper[4776]: I1011 10:39:27.406881 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kubelet-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:39:27.407016 master-2 kubenswrapper[4776]: I1011 10:39:27.407002 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c59b4d8-fa7a-4c50-b130-8b4857359efa-var-lock\") on node \"master-2\" DevicePath \"\"" Oct 11 10:39:27.410560 master-2 kubenswrapper[4776]: I1011 10:39:27.410509 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5c59b4d8-fa7a-4c50-b130-8b4857359efa" (UID: "5c59b4d8-fa7a-4c50-b130-8b4857359efa"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:39:27.508391 master-2 kubenswrapper[4776]: I1011 10:39:27.508241 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c59b4d8-fa7a-4c50-b130-8b4857359efa-kube-api-access\") on node \"master-2\" DevicePath \"\"" Oct 11 10:39:27.828061 master-2 kubenswrapper[4776]: I1011 10:39:27.827915 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-2" event={"ID":"5c59b4d8-fa7a-4c50-b130-8b4857359efa","Type":"ContainerDied","Data":"966a418bbbd7a4398464bc0352257041c54b7d3624ba848b9d0be21d900f1a4b"} Oct 11 10:39:27.828061 master-2 kubenswrapper[4776]: I1011 10:39:27.827985 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="966a418bbbd7a4398464bc0352257041c54b7d3624ba848b9d0be21d900f1a4b" Oct 11 10:39:27.828061 master-2 kubenswrapper[4776]: I1011 10:39:27.828011 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-2" Oct 11 10:39:29.193367 master-2 kubenswrapper[4776]: I1011 10:39:29.193325 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Oct 11 10:39:29.194099 master-2 kubenswrapper[4776]: I1011 10:39:29.194017 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" Oct 11 10:39:30.023970 master-2 kubenswrapper[4776]: I1011 10:39:30.023869 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body= Oct 11 10:39:30.024318 master-2 kubenswrapper[4776]: I1011 10:39:30.023969 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" Oct 11 10:39:30.265861 master-2 kubenswrapper[4776]: I1011 10:39:30.265779 4776 patch_prober.go:28] interesting pod/metrics-server-65d86dff78-crzgp container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:39:30.265861 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:39:30.265861 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:39:30.265861 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:39:30.265861 master-2 kubenswrapper[4776]: [+]metric-storage-ready ok Oct 11 10:39:30.265861 master-2 kubenswrapper[4776]: [+]metric-informer-sync ok Oct 11 10:39:30.265861 master-2 kubenswrapper[4776]: [+]metadata-informer-sync ok Oct 11 10:39:30.265861 master-2 kubenswrapper[4776]: [-]shutdown failed: 
reason withheld Oct 11 10:39:30.265861 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:39:30.265861 master-2 kubenswrapper[4776]: I1011 10:39:30.265854 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" podUID="5473628e-94c8-4706-bb03-ff4836debe5f" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:39:31.464585 master-2 kubenswrapper[4776]: I1011 10:39:31.464479 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:39:31.464585 master-2 kubenswrapper[4776]: I1011 10:39:31.464577 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:39:34.193561 master-2 kubenswrapper[4776]: I1011 10:39:34.193470 4776 patch_prober.go:28] interesting pod/apiserver-7845cf54d8-h5nlf container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Oct 11 10:39:34.194091 master-2 kubenswrapper[4776]: I1011 10:39:34.193614 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.65:8443: connect: connection refused" Oct 11 10:39:34.877425 master-2 kubenswrapper[4776]: I1011 10:39:34.877369 4776 generic.go:334] "Generic (PLEG): container finished" podID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerID="a1a9c7629f1fd873d3ec9d24b009ce28e04c1ae342e924bb98a2d69fd1fdcc5f" exitCode=0 Oct 11 10:39:34.877425 master-2 kubenswrapper[4776]: I1011 10:39:34.877413 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" event={"ID":"8c500140-fe5c-4fa2-914b-bb1e0c5758ab","Type":"ContainerDied","Data":"a1a9c7629f1fd873d3ec9d24b009ce28e04c1ae342e924bb98a2d69fd1fdcc5f"} Oct 11 10:39:35.185817 master-2 kubenswrapper[4776]: I1011 10:39:35.185769 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:39:35.316593 master-2 kubenswrapper[4776]: I1011 10:39:35.316179 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q8hn\" (UniqueName: \"kubernetes.io/projected/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-kube-api-access-5q8hn\") pod \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " Oct 11 10:39:35.316593 master-2 kubenswrapper[4776]: I1011 10:39:35.316312 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit\") pod \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " Oct 11 10:39:35.316593 master-2 kubenswrapper[4776]: I1011 10:39:35.316335 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit-dir\") pod \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " Oct 11 10:39:35.316593 master-2 kubenswrapper[4776]: I1011 10:39:35.316361 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-trusted-ca-bundle\") pod \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " Oct 11 10:39:35.316593 master-2 kubenswrapper[4776]: I1011 10:39:35.316402 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-encryption-config\") pod \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " Oct 11 10:39:35.316593 master-2 kubenswrapper[4776]: I1011 10:39:35.316450 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-client\") pod \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " Oct 11 10:39:35.316593 master-2 kubenswrapper[4776]: I1011 10:39:35.316478 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-serving-cert\") pod \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " Oct 11 10:39:35.316593 master-2 kubenswrapper[4776]: I1011 10:39:35.316504 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-serving-ca\") pod \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " Oct 11 10:39:35.316593 master-2 kubenswrapper[4776]: I1011 10:39:35.316539 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-config\") pod \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " Oct 11 10:39:35.316593 master-2 kubenswrapper[4776]: I1011 10:39:35.316568 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-image-import-ca\") pod \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " Oct 11 10:39:35.316593 master-2 kubenswrapper[4776]: I1011 10:39:35.316591 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-node-pullsecrets\") pod \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\" (UID: \"8c500140-fe5c-4fa2-914b-bb1e0c5758ab\") " Oct 11 10:39:35.321008 master-2 kubenswrapper[4776]: I1011 10:39:35.316966 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "8c500140-fe5c-4fa2-914b-bb1e0c5758ab" (UID: "8c500140-fe5c-4fa2-914b-bb1e0c5758ab"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:39:35.321008 master-2 kubenswrapper[4776]: I1011 10:39:35.317517 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "8c500140-fe5c-4fa2-914b-bb1e0c5758ab" (UID: "8c500140-fe5c-4fa2-914b-bb1e0c5758ab"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:39:35.321008 master-2 kubenswrapper[4776]: I1011 10:39:35.318491 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit" (OuterVolumeSpecName: "audit") pod "8c500140-fe5c-4fa2-914b-bb1e0c5758ab" (UID: "8c500140-fe5c-4fa2-914b-bb1e0c5758ab"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:39:35.321008 master-2 kubenswrapper[4776]: I1011 10:39:35.319307 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-config" (OuterVolumeSpecName: "config") pod "8c500140-fe5c-4fa2-914b-bb1e0c5758ab" (UID: "8c500140-fe5c-4fa2-914b-bb1e0c5758ab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:39:35.321008 master-2 kubenswrapper[4776]: I1011 10:39:35.319905 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8c500140-fe5c-4fa2-914b-bb1e0c5758ab" (UID: "8c500140-fe5c-4fa2-914b-bb1e0c5758ab"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:39:35.321008 master-2 kubenswrapper[4776]: I1011 10:39:35.319998 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "8c500140-fe5c-4fa2-914b-bb1e0c5758ab" (UID: "8c500140-fe5c-4fa2-914b-bb1e0c5758ab"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:39:35.321008 master-2 kubenswrapper[4776]: I1011 10:39:35.320130 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-kube-api-access-5q8hn" (OuterVolumeSpecName: "kube-api-access-5q8hn") pod "8c500140-fe5c-4fa2-914b-bb1e0c5758ab" (UID: "8c500140-fe5c-4fa2-914b-bb1e0c5758ab"). InnerVolumeSpecName "kube-api-access-5q8hn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:39:35.321008 master-2 kubenswrapper[4776]: I1011 10:39:35.320413 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "8c500140-fe5c-4fa2-914b-bb1e0c5758ab" (UID: "8c500140-fe5c-4fa2-914b-bb1e0c5758ab"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:39:35.321008 master-2 kubenswrapper[4776]: I1011 10:39:35.320746 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8c500140-fe5c-4fa2-914b-bb1e0c5758ab" (UID: "8c500140-fe5c-4fa2-914b-bb1e0c5758ab"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:39:35.325292 master-2 kubenswrapper[4776]: I1011 10:39:35.325203 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "8c500140-fe5c-4fa2-914b-bb1e0c5758ab" (UID: "8c500140-fe5c-4fa2-914b-bb1e0c5758ab"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:39:35.335178 master-2 kubenswrapper[4776]: I1011 10:39:35.329891 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "8c500140-fe5c-4fa2-914b-bb1e0c5758ab" (UID: "8c500140-fe5c-4fa2-914b-bb1e0c5758ab"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:39:35.417664 master-2 kubenswrapper[4776]: I1011 10:39:35.417608 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:39:35.417664 master-2 kubenswrapper[4776]: I1011 10:39:35.417651 4776 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-image-import-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:39:35.417664 master-2 kubenswrapper[4776]: I1011 10:39:35.417665 4776 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-node-pullsecrets\") on node \"master-2\" DevicePath \"\"" Oct 11 10:39:35.417664 master-2 kubenswrapper[4776]: I1011 10:39:35.417694 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q8hn\" (UniqueName: \"kubernetes.io/projected/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-kube-api-access-5q8hn\") on node \"master-2\" DevicePath \"\"" Oct 11 10:39:35.417664 master-2 kubenswrapper[4776]: I1011 10:39:35.417703 4776 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit\") on node \"master-2\" DevicePath \"\"" Oct 11 10:39:35.417664 master-2 kubenswrapper[4776]: I1011 10:39:35.417711 4776 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-audit-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:39:35.417664 master-2 kubenswrapper[4776]: I1011 10:39:35.417721 4776 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:39:35.418127 master-2 kubenswrapper[4776]: I1011 10:39:35.417729 4776 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-encryption-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:39:35.418127 master-2 kubenswrapper[4776]: I1011 10:39:35.417737 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-client\") on node \"master-2\" DevicePath \"\"" Oct 11 10:39:35.418127 master-2 kubenswrapper[4776]: I1011 10:39:35.417745 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:39:35.418127 master-2 kubenswrapper[4776]: I1011 10:39:35.417754 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c500140-fe5c-4fa2-914b-bb1e0c5758ab-etcd-serving-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:39:35.885842 master-2 kubenswrapper[4776]: I1011 10:39:35.885505 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" event={"ID":"8c500140-fe5c-4fa2-914b-bb1e0c5758ab","Type":"ContainerDied","Data":"096d8e6920a041962b0dbcc41b2a283a99938e8e8c28669a5a9ec5f599e847be"} Oct 11 10:39:35.885842 master-2 kubenswrapper[4776]: I1011 
10:39:35.885555 4776 scope.go:117] "RemoveContainer" containerID="33b8451dee3f8d5ed8e144b04e3c4757d199f647e9b246655c277be3cef812a5" Oct 11 10:39:35.885842 master-2 kubenswrapper[4776]: I1011 10:39:35.885640 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7845cf54d8-h5nlf" Oct 11 10:39:35.904062 master-2 kubenswrapper[4776]: I1011 10:39:35.904031 4776 scope.go:117] "RemoveContainer" containerID="a1a9c7629f1fd873d3ec9d24b009ce28e04c1ae342e924bb98a2d69fd1fdcc5f" Oct 11 10:39:35.926323 master-2 kubenswrapper[4776]: I1011 10:39:35.926298 4776 scope.go:117] "RemoveContainer" containerID="227b0ea6948a9655dda8b2fd87923ef92a7b65ccb09fd037cc6c580377f3d16c" Oct 11 10:39:35.968507 master-2 kubenswrapper[4776]: I1011 10:39:35.968433 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-7845cf54d8-h5nlf"] Oct 11 10:39:36.011544 master-2 kubenswrapper[4776]: I1011 10:39:36.011484 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-7845cf54d8-h5nlf"] Oct 11 10:39:36.068554 master-2 kubenswrapper[4776]: I1011 10:39:36.068474 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" path="/var/lib/kubelet/pods/8c500140-fe5c-4fa2-914b-bb1e0c5758ab/volumes" Oct 11 10:39:36.465289 master-2 kubenswrapper[4776]: I1011 10:39:36.465213 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:39:36.465960 master-2 kubenswrapper[4776]: I1011 10:39:36.465313 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:39:36.465960 master-2 kubenswrapper[4776]: I1011 10:39:36.465447 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" Oct 11 10:39:36.466548 master-2 kubenswrapper[4776]: I1011 10:39:36.466458 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:39:36.466727 master-2 kubenswrapper[4776]: I1011 10:39:36.466604 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:39:38.062643 master-2 kubenswrapper[4776]: I1011 10:39:38.062511 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:38.081390 master-2 kubenswrapper[4776]: I1011 10:39:38.081325 4776 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="e7a77e3c-dbf6-4cb7-b694-b6bfe84a86da" Oct 11 10:39:38.081390 master-2 kubenswrapper[4776]: I1011 10:39:38.081369 4776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="e7a77e3c-dbf6-4cb7-b694-b6bfe84a86da" Oct 11 10:39:38.161261 master-2 kubenswrapper[4776]: I1011 10:39:38.161189 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"] Oct 11 10:39:38.240438 master-2 kubenswrapper[4776]: I1011 10:39:38.240344 4776 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:38.257050 master-2 kubenswrapper[4776]: I1011 10:39:38.256930 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"] Oct 11 10:39:38.362110 master-2 kubenswrapper[4776]: I1011 10:39:38.361772 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"] Oct 11 10:39:38.386265 master-2 kubenswrapper[4776]: I1011 10:39:38.386210 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:38.416375 master-2 kubenswrapper[4776]: W1011 10:39:38.416320 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dd82f838b5636582534da82a3996ea6.slice/crio-3c995ec4b96c7edcbb05a935547fbc266fb8b8ebcc60f0a31d4c752102f99347 WatchSource:0}: Error finding container 3c995ec4b96c7edcbb05a935547fbc266fb8b8ebcc60f0a31d4c752102f99347: Status 404 returned error can't find the container with id 3c995ec4b96c7edcbb05a935547fbc266fb8b8ebcc60f0a31d4c752102f99347 Oct 11 10:39:38.911197 master-2 kubenswrapper[4776]: I1011 10:39:38.911156 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"2dd82f838b5636582534da82a3996ea6","Type":"ContainerStarted","Data":"f4e2e49582cba1c1c050448b4ca7740185e407bb7310d805fa88bb2ea911a4b1"} Oct 11 10:39:38.911197 master-2 kubenswrapper[4776]: I1011 10:39:38.911193 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"2dd82f838b5636582534da82a3996ea6","Type":"ContainerStarted","Data":"3c995ec4b96c7edcbb05a935547fbc266fb8b8ebcc60f0a31d4c752102f99347"} Oct 11 10:39:39.892213 master-2 kubenswrapper[4776]: I1011 10:39:39.892145 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-69df5d46bc-klwcv"] Oct 11 10:39:39.893016 master-2 kubenswrapper[4776]: E1011 10:39:39.892453 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="fix-audit-permissions" Oct 11 10:39:39.893016 master-2 kubenswrapper[4776]: I1011 10:39:39.892473 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="fix-audit-permissions" Oct 11 10:39:39.893016 master-2 kubenswrapper[4776]: E1011 
10:39:39.892494 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c59b4d8-fa7a-4c50-b130-8b4857359efa" containerName="installer" Oct 11 10:39:39.893016 master-2 kubenswrapper[4776]: I1011 10:39:39.892506 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c59b4d8-fa7a-4c50-b130-8b4857359efa" containerName="installer" Oct 11 10:39:39.893016 master-2 kubenswrapper[4776]: E1011 10:39:39.892529 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver-check-endpoints" Oct 11 10:39:39.893016 master-2 kubenswrapper[4776]: I1011 10:39:39.892543 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver-check-endpoints" Oct 11 10:39:39.893016 master-2 kubenswrapper[4776]: E1011 10:39:39.892567 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" Oct 11 10:39:39.893016 master-2 kubenswrapper[4776]: I1011 10:39:39.892583 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" Oct 11 10:39:39.893016 master-2 kubenswrapper[4776]: I1011 10:39:39.892785 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver-check-endpoints" Oct 11 10:39:39.893016 master-2 kubenswrapper[4776]: I1011 10:39:39.892814 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c59b4d8-fa7a-4c50-b130-8b4857359efa" containerName="installer" Oct 11 10:39:39.893016 master-2 kubenswrapper[4776]: I1011 10:39:39.892833 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c500140-fe5c-4fa2-914b-bb1e0c5758ab" containerName="openshift-apiserver" Oct 11 10:39:39.894389 master-2 kubenswrapper[4776]: I1011 10:39:39.894339 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:39.905325 master-2 kubenswrapper[4776]: I1011 10:39:39.905266 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 11 10:39:39.905807 master-2 kubenswrapper[4776]: I1011 10:39:39.905764 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 11 10:39:39.906332 master-2 kubenswrapper[4776]: I1011 10:39:39.906287 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-lntq9" Oct 11 10:39:39.906601 master-2 kubenswrapper[4776]: I1011 10:39:39.906561 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 11 10:39:39.907123 master-2 kubenswrapper[4776]: I1011 10:39:39.907057 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 11 10:39:39.907355 master-2 kubenswrapper[4776]: I1011 10:39:39.907274 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 11 10:39:39.907553 master-2 kubenswrapper[4776]: I1011 10:39:39.907469 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 11 10:39:39.907759 master-2 kubenswrapper[4776]: I1011 10:39:39.907731 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 11 10:39:39.907936 master-2 kubenswrapper[4776]: I1011 10:39:39.907897 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 11 10:39:39.908216 master-2 kubenswrapper[4776]: I1011 10:39:39.908173 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 11 10:39:39.916356 master-2 kubenswrapper[4776]: I1011 10:39:39.916308 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 11 10:39:40.023749 master-2 kubenswrapper[4776]: I1011 10:39:40.023650 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body= Oct 11 10:39:40.023950 master-2 kubenswrapper[4776]: I1011 10:39:40.023753 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" Oct 11 10:39:40.089612 master-2 kubenswrapper[4776]: I1011 10:39:40.089465 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.089612 master-2 kubenswrapper[4776]: I1011 10:39:40.089521 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-client\") pod \"apiserver-69df5d46bc-klwcv\" 
(UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.089612 master-2 kubenswrapper[4776]: I1011 10:39:40.089546 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-encryption-config\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.089612 master-2 kubenswrapper[4776]: I1011 10:39:40.089569 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-trusted-ca-bundle\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.089612 master-2 kubenswrapper[4776]: I1011 10:39:40.089591 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit-dir\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.089612 master-2 kubenswrapper[4776]: I1011 10:39:40.089622 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-node-pullsecrets\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.090055 master-2 kubenswrapper[4776]: I1011 10:39:40.089649 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-serving-cert\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.090055 master-2 kubenswrapper[4776]: I1011 10:39:40.089695 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-serving-ca\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.090055 master-2 kubenswrapper[4776]: I1011 10:39:40.089715 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-config\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.090055 master-2 kubenswrapper[4776]: I1011 10:39:40.089733 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vljtf\" (UniqueName: \"kubernetes.io/projected/4125c617-d1f6-4f29-bae1-1165604b9cbd-kube-api-access-vljtf\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.090055 master-2 
kubenswrapper[4776]: I1011 10:39:40.089748 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-image-import-ca\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.190669 master-2 kubenswrapper[4776]: I1011 10:39:40.190612 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-69df5d46bc-klwcv"] Oct 11 10:39:40.191182 master-2 kubenswrapper[4776]: I1011 10:39:40.191148 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-serving-ca\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191182 master-2 kubenswrapper[4776]: I1011 10:39:40.191179 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-config\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191275 master-2 kubenswrapper[4776]: I1011 10:39:40.191203 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vljtf\" (UniqueName: \"kubernetes.io/projected/4125c617-d1f6-4f29-bae1-1165604b9cbd-kube-api-access-vljtf\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191275 master-2 kubenswrapper[4776]: I1011 10:39:40.191220 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-image-import-ca\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191275 master-2 kubenswrapper[4776]: I1011 10:39:40.191257 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191275 master-2 kubenswrapper[4776]: I1011 10:39:40.191272 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-client\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191438 master-2 kubenswrapper[4776]: I1011 10:39:40.191294 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-encryption-config\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191438 master-2 kubenswrapper[4776]: I1011 10:39:40.191313 4776 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-trusted-ca-bundle\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191438 master-2 kubenswrapper[4776]: I1011 10:39:40.191333 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit-dir\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191438 master-2 kubenswrapper[4776]: I1011 10:39:40.191362 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-node-pullsecrets\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191438 master-2 kubenswrapper[4776]: I1011 10:39:40.191386 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-serving-cert\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191919 master-2 kubenswrapper[4776]: I1011 10:39:40.191545 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit-dir\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.191919 master-2 kubenswrapper[4776]: I1011 10:39:40.191659 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-node-pullsecrets\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.192059 master-2 kubenswrapper[4776]: I1011 10:39:40.191992 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-config\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.193416 master-2 kubenswrapper[4776]: I1011 10:39:40.192363 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-serving-ca\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.197269 master-2 kubenswrapper[4776]: I1011 10:39:40.197232 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.197269 master-2 kubenswrapper[4776]: I1011 10:39:40.197266 4776 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-serving-cert\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.197399 master-2 kubenswrapper[4776]: I1011 10:39:40.197374 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-trusted-ca-bundle\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.197438 master-2 kubenswrapper[4776]: I1011 10:39:40.197400 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-encryption-config\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.197472 master-2 kubenswrapper[4776]: I1011 10:39:40.197452 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-client\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.197540 master-2 kubenswrapper[4776]: I1011 10:39:40.197459 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-image-import-ca\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.486866 master-2 kubenswrapper[4776]: I1011 10:39:40.486787 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vljtf\" (UniqueName: \"kubernetes.io/projected/4125c617-d1f6-4f29-bae1-1165604b9cbd-kube-api-access-vljtf\") pod \"apiserver-69df5d46bc-klwcv\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.514733 master-2 kubenswrapper[4776]: I1011 10:39:40.514688 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:40.844547 master-2 kubenswrapper[4776]: E1011 10:39:40.844468 4776 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-ap7ej74ueigk4: secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:39:40.844547 master-2 kubenswrapper[4776]: E1011 10:39:40.844552 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle podName:5473628e-94c8-4706-bb03-ff4836debe5f nodeName:}" failed. No retries permitted until 2025-10-11 10:41:42.844536232 +0000 UTC m=+937.628962941 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle") pod "metrics-server-65d86dff78-crzgp" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f") : secret "metrics-server-ap7ej74ueigk4" not found Oct 11 10:39:40.927186 master-2 kubenswrapper[4776]: I1011 10:39:40.927124 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"2dd82f838b5636582534da82a3996ea6","Type":"ContainerStarted","Data":"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752"} Oct 11 10:39:40.927186 master-2 kubenswrapper[4776]: I1011 10:39:40.927172 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"2dd82f838b5636582534da82a3996ea6","Type":"ContainerStarted","Data":"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163"} Oct 11 10:39:40.927755 master-2 kubenswrapper[4776]: I1011 10:39:40.927204 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"2dd82f838b5636582534da82a3996ea6","Type":"ContainerStarted","Data":"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4"} Oct 11 10:39:41.288032 master-2 kubenswrapper[4776]: I1011 10:39:41.287940 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-69df5d46bc-klwcv"] Oct 11 10:39:41.416597 master-2 kubenswrapper[4776]: W1011 10:39:41.416533 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4125c617_d1f6_4f29_bae1_1165604b9cbd.slice/crio-8da59f9f35574c5f3bacdb804091911baf908469aad4410d43906f030b48b831 WatchSource:0}: Error finding container 8da59f9f35574c5f3bacdb804091911baf908469aad4410d43906f030b48b831: Status 404 returned error can't find the container with id 8da59f9f35574c5f3bacdb804091911baf908469aad4410d43906f030b48b831 Oct 11 10:39:41.464507 master-2 kubenswrapper[4776]: I1011 10:39:41.464450 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:39:41.464661 master-2 kubenswrapper[4776]: I1011 10:39:41.464504 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:39:41.937003 master-2 kubenswrapper[4776]: I1011 10:39:41.936940 4776 generic.go:334] "Generic (PLEG): container finished" podID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerID="eb57f483b1fb4288bd615f1e2349b2230b6272e2d1ba16c1f8dcb73ce4999885" exitCode=0 Oct 11 10:39:41.937725 master-2 kubenswrapper[4776]: I1011 10:39:41.937057 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" event={"ID":"4125c617-d1f6-4f29-bae1-1165604b9cbd","Type":"ContainerDied","Data":"eb57f483b1fb4288bd615f1e2349b2230b6272e2d1ba16c1f8dcb73ce4999885"} Oct 11 10:39:41.937725 master-2 kubenswrapper[4776]: I1011 10:39:41.937144 4776 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" event={"ID":"4125c617-d1f6-4f29-bae1-1165604b9cbd","Type":"ContainerStarted","Data":"8da59f9f35574c5f3bacdb804091911baf908469aad4410d43906f030b48b831"} Oct 11 10:39:42.947640 master-2 kubenswrapper[4776]: I1011 10:39:42.947591 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" event={"ID":"4125c617-d1f6-4f29-bae1-1165604b9cbd","Type":"ContainerStarted","Data":"e4993a00e7728dc437a4c6094596c369ce11c26b8ae277d77d9133c67e1933b9"} Oct 11 10:39:42.948476 master-2 kubenswrapper[4776]: I1011 10:39:42.948454 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" event={"ID":"4125c617-d1f6-4f29-bae1-1165604b9cbd","Type":"ContainerStarted","Data":"f9151fc06dbd01664b47f868a0c43e1a9e6b83f5d73cdb1bae7462ef40f38776"} Oct 11 10:39:43.083896 master-2 kubenswrapper[4776]: I1011 10:39:43.083819 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podStartSLOduration=5.083801432 podStartE2EDuration="5.083801432s" podCreationTimestamp="2025-10-11 10:39:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:39:42.469099786 +0000 UTC m=+817.253526505" watchObservedRunningTime="2025-10-11 10:39:43.083801432 +0000 UTC m=+817.868228141" Oct 11 10:39:43.087631 master-2 kubenswrapper[4776]: I1011 10:39:43.086245 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podStartSLOduration=52.086237109 podStartE2EDuration="52.086237109s" podCreationTimestamp="2025-10-11 10:38:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:39:43.083264638 +0000 UTC m=+817.867691357" watchObservedRunningTime="2025-10-11 10:39:43.086237109 +0000 UTC m=+817.870663818" Oct 11 10:39:45.515485 master-2 kubenswrapper[4776]: I1011 10:39:45.515424 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:45.516101 master-2 kubenswrapper[4776]: I1011 10:39:45.515975 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:45.522944 master-2 kubenswrapper[4776]: I1011 10:39:45.522908 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:45.968573 master-2 kubenswrapper[4776]: I1011 10:39:45.968521 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:39:46.464815 master-2 kubenswrapper[4776]: I1011 10:39:46.464491 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:39:46.464815 master-2 kubenswrapper[4776]: I1011 10:39:46.464621 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" 
podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:39:48.387442 master-2 kubenswrapper[4776]: I1011 10:39:48.387155 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:48.388210 master-2 kubenswrapper[4776]: I1011 10:39:48.387474 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:48.388210 master-2 kubenswrapper[4776]: I1011 10:39:48.387504 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:48.388210 master-2 kubenswrapper[4776]: I1011 10:39:48.387522 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:48.388210 master-2 kubenswrapper[4776]: I1011 10:39:48.387592 4776 patch_prober.go:28] interesting pod/kube-controller-manager-master-2 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:39:48.388210 master-2 kubenswrapper[4776]: I1011 10:39:48.387650 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:39:48.391925 master-2 kubenswrapper[4776]: I1011 10:39:48.391893 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:49.157577 master-2 kubenswrapper[4776]: I1011 10:39:49.157496 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" Oct 11 10:39:50.023627 master-2 kubenswrapper[4776]: I1011 10:39:50.023569 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body= Oct 11 10:39:50.024164 master-2 kubenswrapper[4776]: I1011 10:39:50.023630 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" Oct 11 10:39:50.267361 master-2 kubenswrapper[4776]: I1011 10:39:50.267287 4776 patch_prober.go:28] interesting pod/metrics-server-65d86dff78-crzgp container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:39:50.267361 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:39:50.267361 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:39:50.267361 master-2 kubenswrapper[4776]: 
[+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:39:50.267361 master-2 kubenswrapper[4776]: [+]metric-storage-ready ok Oct 11 10:39:50.267361 master-2 kubenswrapper[4776]: [+]metric-informer-sync ok Oct 11 10:39:50.267361 master-2 kubenswrapper[4776]: [+]metadata-informer-sync ok Oct 11 10:39:50.267361 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:39:50.267361 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:39:50.267829 master-2 kubenswrapper[4776]: I1011 10:39:50.267373 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" podUID="5473628e-94c8-4706-bb03-ff4836debe5f" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:39:51.464815 master-2 kubenswrapper[4776]: I1011 10:39:51.464719 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:39:51.465626 master-2 kubenswrapper[4776]: I1011 10:39:51.464837 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:39:56.464096 master-2 kubenswrapper[4776]: I1011 10:39:56.464034 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:39:56.464891 master-2 kubenswrapper[4776]: I1011 10:39:56.464107 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:39:58.387551 master-2 kubenswrapper[4776]: I1011 10:39:58.387484 4776 patch_prober.go:28] interesting pod/kube-controller-manager-master-2 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:39:58.388173 master-2 kubenswrapper[4776]: I1011 10:39:58.387555 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:39:58.390767 master-2 kubenswrapper[4776]: I1011 10:39:58.390731 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:39:59.486364 master-2 kubenswrapper[4776]: I1011 10:39:59.486269 4776 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-scheduler/revision-pruner-6-master-2"] Oct 11 10:39:59.487458 master-2 kubenswrapper[4776]: I1011 10:39:59.487251 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-2" Oct 11 10:39:59.489636 master-2 kubenswrapper[4776]: I1011 10:39:59.489572 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-js756" Oct 11 10:39:59.552135 master-2 kubenswrapper[4776]: I1011 10:39:59.552019 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4114c1be-d3d9-438f-b215-619b0aa3e114-kubelet-dir\") pod \"revision-pruner-6-master-2\" (UID: \"4114c1be-d3d9-438f-b215-619b0aa3e114\") " pod="openshift-kube-scheduler/revision-pruner-6-master-2" Oct 11 10:39:59.552135 master-2 kubenswrapper[4776]: I1011 10:39:59.552089 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4114c1be-d3d9-438f-b215-619b0aa3e114-kube-api-access\") pod \"revision-pruner-6-master-2\" (UID: \"4114c1be-d3d9-438f-b215-619b0aa3e114\") " pod="openshift-kube-scheduler/revision-pruner-6-master-2" Oct 11 10:39:59.649827 master-2 kubenswrapper[4776]: I1011 10:39:59.649763 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-2"] Oct 11 10:39:59.653330 master-2 kubenswrapper[4776]: I1011 10:39:59.653269 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4114c1be-d3d9-438f-b215-619b0aa3e114-kube-api-access\") pod \"revision-pruner-6-master-2\" (UID: \"4114c1be-d3d9-438f-b215-619b0aa3e114\") " pod="openshift-kube-scheduler/revision-pruner-6-master-2" Oct 11 10:39:59.653448 master-2 kubenswrapper[4776]: I1011 10:39:59.653414 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4114c1be-d3d9-438f-b215-619b0aa3e114-kubelet-dir\") pod \"revision-pruner-6-master-2\" (UID: \"4114c1be-d3d9-438f-b215-619b0aa3e114\") " pod="openshift-kube-scheduler/revision-pruner-6-master-2" Oct 11 10:39:59.653548 master-2 kubenswrapper[4776]: I1011 10:39:59.653492 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4114c1be-d3d9-438f-b215-619b0aa3e114-kubelet-dir\") pod \"revision-pruner-6-master-2\" (UID: \"4114c1be-d3d9-438f-b215-619b0aa3e114\") " pod="openshift-kube-scheduler/revision-pruner-6-master-2" Oct 11 10:39:59.756404 master-2 kubenswrapper[4776]: I1011 10:39:59.756277 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4114c1be-d3d9-438f-b215-619b0aa3e114-kube-api-access\") pod \"revision-pruner-6-master-2\" (UID: \"4114c1be-d3d9-438f-b215-619b0aa3e114\") " pod="openshift-kube-scheduler/revision-pruner-6-master-2" Oct 11 10:39:59.802924 master-2 kubenswrapper[4776]: I1011 10:39:59.802872 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-2" Oct 11 10:40:00.023870 master-2 kubenswrapper[4776]: I1011 10:40:00.023685 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body= Oct 11 10:40:00.023870 master-2 kubenswrapper[4776]: I1011 10:40:00.023735 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" Oct 11 10:40:00.204099 master-2 kubenswrapper[4776]: I1011 10:40:00.204065 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-2"] Oct 11 10:40:00.210494 master-2 kubenswrapper[4776]: W1011 10:40:00.210440 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4114c1be_d3d9_438f_b215_619b0aa3e114.slice/crio-fbcb84dc8dfb875ffbfdc915ddf23aa5c79f58b53751526f2cda898623241b23 WatchSource:0}: Error finding container fbcb84dc8dfb875ffbfdc915ddf23aa5c79f58b53751526f2cda898623241b23: Status 404 returned error can't find the container with id fbcb84dc8dfb875ffbfdc915ddf23aa5c79f58b53751526f2cda898623241b23 Oct 11 10:40:01.050215 master-2 kubenswrapper[4776]: I1011 10:40:01.050138 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-2" event={"ID":"4114c1be-d3d9-438f-b215-619b0aa3e114","Type":"ContainerStarted","Data":"2a8171298231c0b71d807439369128048e0314ae6d16837ac065e1139fb9e09c"} Oct 11 10:40:01.050215 master-2 kubenswrapper[4776]: I1011 10:40:01.050193 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-2" event={"ID":"4114c1be-d3d9-438f-b215-619b0aa3e114","Type":"ContainerStarted","Data":"fbcb84dc8dfb875ffbfdc915ddf23aa5c79f58b53751526f2cda898623241b23"} Oct 11 10:40:01.080237 master-2 kubenswrapper[4776]: I1011 10:40:01.080151 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-master-2" podStartSLOduration=2.080130726 podStartE2EDuration="2.080130726s" podCreationTimestamp="2025-10-11 10:39:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:40:01.0766124 +0000 UTC m=+835.861039109" watchObservedRunningTime="2025-10-11 10:40:01.080130726 +0000 UTC m=+835.864557435" Oct 11 10:40:01.464819 master-2 kubenswrapper[4776]: I1011 10:40:01.464739 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:40:01.464819 master-2 kubenswrapper[4776]: I1011 10:40:01.464801 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection 
refused" Oct 11 10:40:02.068060 master-2 kubenswrapper[4776]: I1011 10:40:02.067767 4776 generic.go:334] "Generic (PLEG): container finished" podID="4114c1be-d3d9-438f-b215-619b0aa3e114" containerID="2a8171298231c0b71d807439369128048e0314ae6d16837ac065e1139fb9e09c" exitCode=0 Oct 11 10:40:02.070480 master-2 kubenswrapper[4776]: I1011 10:40:02.070433 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-2" event={"ID":"4114c1be-d3d9-438f-b215-619b0aa3e114","Type":"ContainerDied","Data":"2a8171298231c0b71d807439369128048e0314ae6d16837ac065e1139fb9e09c"} Oct 11 10:40:03.369998 master-2 kubenswrapper[4776]: I1011 10:40:03.369931 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-2" Oct 11 10:40:03.521514 master-2 kubenswrapper[4776]: I1011 10:40:03.521466 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4114c1be-d3d9-438f-b215-619b0aa3e114-kube-api-access\") pod \"4114c1be-d3d9-438f-b215-619b0aa3e114\" (UID: \"4114c1be-d3d9-438f-b215-619b0aa3e114\") " Oct 11 10:40:03.521771 master-2 kubenswrapper[4776]: I1011 10:40:03.521530 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4114c1be-d3d9-438f-b215-619b0aa3e114-kubelet-dir\") pod \"4114c1be-d3d9-438f-b215-619b0aa3e114\" (UID: \"4114c1be-d3d9-438f-b215-619b0aa3e114\") " Oct 11 10:40:03.521771 master-2 kubenswrapper[4776]: I1011 10:40:03.521740 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4114c1be-d3d9-438f-b215-619b0aa3e114-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4114c1be-d3d9-438f-b215-619b0aa3e114" (UID: "4114c1be-d3d9-438f-b215-619b0aa3e114"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:40:03.521985 master-2 kubenswrapper[4776]: I1011 10:40:03.521965 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4114c1be-d3d9-438f-b215-619b0aa3e114-kubelet-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:40:03.524952 master-2 kubenswrapper[4776]: I1011 10:40:03.524923 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4114c1be-d3d9-438f-b215-619b0aa3e114-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4114c1be-d3d9-438f-b215-619b0aa3e114" (UID: "4114c1be-d3d9-438f-b215-619b0aa3e114"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:40:03.623253 master-2 kubenswrapper[4776]: I1011 10:40:03.623118 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4114c1be-d3d9-438f-b215-619b0aa3e114-kube-api-access\") on node \"master-2\" DevicePath \"\"" Oct 11 10:40:04.081240 master-2 kubenswrapper[4776]: I1011 10:40:04.081183 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-2" event={"ID":"4114c1be-d3d9-438f-b215-619b0aa3e114","Type":"ContainerDied","Data":"fbcb84dc8dfb875ffbfdc915ddf23aa5c79f58b53751526f2cda898623241b23"} Oct 11 10:40:04.081240 master-2 kubenswrapper[4776]: I1011 10:40:04.081242 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbcb84dc8dfb875ffbfdc915ddf23aa5c79f58b53751526f2cda898623241b23" Oct 11 10:40:04.081457 master-2 kubenswrapper[4776]: I1011 10:40:04.081300 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-2" Oct 11 10:40:06.464500 master-2 kubenswrapper[4776]: I1011 10:40:06.464414 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:40:06.465034 master-2 kubenswrapper[4776]: I1011 10:40:06.464533 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:40:08.386689 master-2 kubenswrapper[4776]: I1011 10:40:08.386612 4776 patch_prober.go:28] interesting pod/kube-controller-manager-master-2 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:40:08.387172 master-2 kubenswrapper[4776]: I1011 10:40:08.386716 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:40:08.387172 master-2 kubenswrapper[4776]: I1011 10:40:08.386757 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:40:08.387400 master-2 kubenswrapper[4776]: I1011 10:40:08.387379 4776 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"f4e2e49582cba1c1c050448b4ca7740185e407bb7310d805fa88bb2ea911a4b1"} pod="openshift-kube-controller-manager/kube-controller-manager-master-2" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Oct 11 10:40:08.387516 master-2 kubenswrapper[4776]: I1011 10:40:08.387493 4776 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" containerID="cri-o://f4e2e49582cba1c1c050448b4ca7740185e407bb7310d805fa88bb2ea911a4b1" gracePeriod=30 Oct 11 10:40:10.024003 master-2 kubenswrapper[4776]: I1011 10:40:10.023929 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body= Oct 11 10:40:10.025037 master-2 kubenswrapper[4776]: I1011 10:40:10.024016 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" Oct 11 10:40:10.265310 master-2 kubenswrapper[4776]: I1011 10:40:10.265204 4776 patch_prober.go:28] interesting pod/metrics-server-65d86dff78-crzgp container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:40:10.265310 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:40:10.265310 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:40:10.265310 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:40:10.265310 master-2 kubenswrapper[4776]: [+]metric-storage-ready ok Oct 11 10:40:10.265310 master-2 kubenswrapper[4776]: [+]metric-informer-sync ok Oct 11 10:40:10.265310 master-2 kubenswrapper[4776]: [+]metadata-informer-sync ok Oct 11 10:40:10.265310 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:40:10.265310 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:40:10.265310 master-2 kubenswrapper[4776]: I1011 10:40:10.265302 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" podUID="5473628e-94c8-4706-bb03-ff4836debe5f" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:40:10.265951 master-2 kubenswrapper[4776]: I1011 10:40:10.265388 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:40:11.464899 master-2 kubenswrapper[4776]: I1011 10:40:11.464820 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:40:11.465385 master-2 kubenswrapper[4776]: I1011 10:40:11.464934 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:40:15.084052 master-2 kubenswrapper[4776]: I1011 10:40:15.083969 4776 scope.go:117] "RemoveContainer" containerID="3bed6e75ec56e4b27551229ac1cfed015f19cbbd37a51de7899f2a409f7f3107" Oct 11 10:40:15.097491 master-2 kubenswrapper[4776]: I1011 10:40:15.097448 4776 
scope.go:117] "RemoveContainer" containerID="0f23b40fdead0283107ab65c161704aa7acddce7d9fe04f98e8ea9ab7f03a4cd" Oct 11 10:40:15.120203 master-2 kubenswrapper[4776]: I1011 10:40:15.120170 4776 scope.go:117] "RemoveContainer" containerID="10893b6ff26cfe0860be9681aea4e014407f210edeb0807d7d50c1f9edb2d910" Oct 11 10:40:15.158586 master-2 kubenswrapper[4776]: I1011 10:40:15.158461 4776 scope.go:117] "RemoveContainer" containerID="65701df136730297d07e924a9003107719afae6bc3e70126f7680b788afdcc01" Oct 11 10:40:15.183699 master-2 kubenswrapper[4776]: I1011 10:40:15.183625 4776 scope.go:117] "RemoveContainer" containerID="c57c60875e1be574e46fe440bf3b2752ffb605bb2f328363af8d1f914310116f" Oct 11 10:40:15.226725 master-2 kubenswrapper[4776]: I1011 10:40:15.226660 4776 scope.go:117] "RemoveContainer" containerID="b1b4a56fa0152f300c6a99db97775492cfcdce4712ae78b30e2ac340b25efd8c" Oct 11 10:40:15.247835 master-2 kubenswrapper[4776]: I1011 10:40:15.247695 4776 scope.go:117] "RemoveContainer" containerID="1c66c420f42b218cae752c5f11d7d84132ff9087b3b755852c6f533f1acaeece" Oct 11 10:40:15.264279 master-2 kubenswrapper[4776]: I1011 10:40:15.264217 4776 scope.go:117] "RemoveContainer" containerID="f3c0c4b66c129c923e0c2ad907f82ab3a83141f8e7f3805af522d3e693204962" Oct 11 10:40:16.464888 master-2 kubenswrapper[4776]: I1011 10:40:16.464769 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:40:16.464888 master-2 kubenswrapper[4776]: I1011 10:40:16.464874 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:40:20.024174 master-2 kubenswrapper[4776]: I1011 10:40:20.024065 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body= Oct 11 10:40:20.024174 master-2 kubenswrapper[4776]: I1011 10:40:20.024143 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" Oct 11 10:40:21.475727 master-2 kubenswrapper[4776]: I1011 10:40:21.475657 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" Oct 11 10:40:30.023863 master-2 kubenswrapper[4776]: I1011 10:40:30.023771 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body= Oct 11 10:40:30.023863 master-2 kubenswrapper[4776]: I1011 10:40:30.023840 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" 
containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" Oct 11 10:40:30.264026 master-2 kubenswrapper[4776]: I1011 10:40:30.263951 4776 patch_prober.go:28] interesting pod/metrics-server-65d86dff78-crzgp container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:40:30.264026 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:40:30.264026 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:40:30.264026 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:40:30.264026 master-2 kubenswrapper[4776]: [+]metric-storage-ready ok Oct 11 10:40:30.264026 master-2 kubenswrapper[4776]: [+]metric-informer-sync ok Oct 11 10:40:30.264026 master-2 kubenswrapper[4776]: [+]metadata-informer-sync ok Oct 11 10:40:30.264026 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:40:30.264026 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:40:30.264026 master-2 kubenswrapper[4776]: I1011 10:40:30.264004 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" podUID="5473628e-94c8-4706-bb03-ff4836debe5f" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:40:39.323867 master-2 kubenswrapper[4776]: I1011 10:40:39.323755 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_2dd82f838b5636582534da82a3996ea6/kube-controller-manager/0.log" Oct 11 10:40:39.324505 master-2 kubenswrapper[4776]: I1011 10:40:39.323892 4776 generic.go:334] "Generic (PLEG): container finished" podID="2dd82f838b5636582534da82a3996ea6" containerID="f4e2e49582cba1c1c050448b4ca7740185e407bb7310d805fa88bb2ea911a4b1" exitCode=137 Oct 11 10:40:39.324505 master-2 kubenswrapper[4776]: I1011 10:40:39.323941 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"2dd82f838b5636582534da82a3996ea6","Type":"ContainerDied","Data":"f4e2e49582cba1c1c050448b4ca7740185e407bb7310d805fa88bb2ea911a4b1"} Oct 11 10:40:39.324505 master-2 kubenswrapper[4776]: I1011 10:40:39.323978 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"2dd82f838b5636582534da82a3996ea6","Type":"ContainerStarted","Data":"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec"} Oct 11 10:40:40.023410 master-2 kubenswrapper[4776]: I1011 10:40:40.023282 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body= Oct 11 10:40:40.023410 master-2 kubenswrapper[4776]: I1011 10:40:40.023346 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" Oct 11 10:40:41.464126 master-2 kubenswrapper[4776]: I1011 10:40:41.464062 4776 patch_prober.go:28] interesting 
pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:40:41.464795 master-2 kubenswrapper[4776]: I1011 10:40:41.464127 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:40:46.464254 master-2 kubenswrapper[4776]: I1011 10:40:46.464181 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:40:46.464254 master-2 kubenswrapper[4776]: I1011 10:40:46.464256 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:40:48.387700 master-2 kubenswrapper[4776]: I1011 10:40:48.387590 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:40:48.387700 master-2 kubenswrapper[4776]: I1011 10:40:48.387595 4776 patch_prober.go:28] interesting pod/kube-controller-manager-master-2 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:40:48.387700 master-2 kubenswrapper[4776]: I1011 10:40:48.387669 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:40:48.388882 master-2 kubenswrapper[4776]: I1011 10:40:48.388308 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:40:49.388656 master-2 kubenswrapper[4776]: I1011 10:40:49.388561 4776 generic.go:334] "Generic (PLEG): container finished" podID="5473628e-94c8-4706-bb03-ff4836debe5f" containerID="bc423808a1318a501a04a81a0b62715e5af3476c9da3fb5de99b8aa1ff2380a0" exitCode=0 Oct 11 10:40:49.388656 master-2 kubenswrapper[4776]: I1011 10:40:49.388620 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" event={"ID":"5473628e-94c8-4706-bb03-ff4836debe5f","Type":"ContainerDied","Data":"bc423808a1318a501a04a81a0b62715e5af3476c9da3fb5de99b8aa1ff2380a0"} Oct 11 10:40:49.388656 master-2 kubenswrapper[4776]: I1011 10:40:49.388687 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" 
event={"ID":"5473628e-94c8-4706-bb03-ff4836debe5f","Type":"ContainerDied","Data":"8271b5e67a2ec2732b55bb993f1b0ac7f455b4b07969cedd192379e7f4289e39"} Oct 11 10:40:49.388656 master-2 kubenswrapper[4776]: I1011 10:40:49.388702 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8271b5e67a2ec2732b55bb993f1b0ac7f455b4b07969cedd192379e7f4289e39" Oct 11 10:40:49.390831 master-2 kubenswrapper[4776]: I1011 10:40:49.390775 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:40:49.556207 master-2 kubenswrapper[4776]: I1011 10:40:49.556116 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle\") pod \"5473628e-94c8-4706-bb03-ff4836debe5f\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " Oct 11 10:40:49.556207 master-2 kubenswrapper[4776]: I1011 10:40:49.556191 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-server-tls\") pod \"5473628e-94c8-4706-bb03-ff4836debe5f\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " Oct 11 10:40:49.556207 master-2 kubenswrapper[4776]: I1011 10:40:49.556229 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-configmap-kubelet-serving-ca-bundle\") pod \"5473628e-94c8-4706-bb03-ff4836debe5f\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " Oct 11 10:40:49.556712 master-2 kubenswrapper[4776]: I1011 10:40:49.556294 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-metrics-server-audit-profiles\") pod \"5473628e-94c8-4706-bb03-ff4836debe5f\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " Oct 11 10:40:49.556712 master-2 kubenswrapper[4776]: I1011 10:40:49.556360 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2vd6\" (UniqueName: \"kubernetes.io/projected/5473628e-94c8-4706-bb03-ff4836debe5f-kube-api-access-q2vd6\") pod \"5473628e-94c8-4706-bb03-ff4836debe5f\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " Oct 11 10:40:49.556712 master-2 kubenswrapper[4776]: I1011 10:40:49.556409 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5473628e-94c8-4706-bb03-ff4836debe5f-audit-log\") pod \"5473628e-94c8-4706-bb03-ff4836debe5f\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " Oct 11 10:40:49.556712 master-2 kubenswrapper[4776]: I1011 10:40:49.556474 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-client-certs\") pod \"5473628e-94c8-4706-bb03-ff4836debe5f\" (UID: \"5473628e-94c8-4706-bb03-ff4836debe5f\") " Oct 11 10:40:49.557120 master-2 kubenswrapper[4776]: I1011 10:40:49.557052 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: 
"configmap-kubelet-serving-ca-bundle") pod "5473628e-94c8-4706-bb03-ff4836debe5f" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:40:49.557824 master-2 kubenswrapper[4776]: I1011 10:40:49.557779 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5473628e-94c8-4706-bb03-ff4836debe5f-audit-log" (OuterVolumeSpecName: "audit-log") pod "5473628e-94c8-4706-bb03-ff4836debe5f" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f"). InnerVolumeSpecName "audit-log". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:40:49.558117 master-2 kubenswrapper[4776]: I1011 10:40:49.558046 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "5473628e-94c8-4706-bb03-ff4836debe5f" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:40:49.560798 master-2 kubenswrapper[4776]: I1011 10:40:49.560732 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "5473628e-94c8-4706-bb03-ff4836debe5f" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:40:49.560945 master-2 kubenswrapper[4776]: I1011 10:40:49.560884 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "5473628e-94c8-4706-bb03-ff4836debe5f" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:40:49.561158 master-2 kubenswrapper[4776]: I1011 10:40:49.561117 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5473628e-94c8-4706-bb03-ff4836debe5f-kube-api-access-q2vd6" (OuterVolumeSpecName: "kube-api-access-q2vd6") pod "5473628e-94c8-4706-bb03-ff4836debe5f" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f"). InnerVolumeSpecName "kube-api-access-q2vd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:40:49.561544 master-2 kubenswrapper[4776]: I1011 10:40:49.561505 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "5473628e-94c8-4706-bb03-ff4836debe5f" (UID: "5473628e-94c8-4706-bb03-ff4836debe5f"). InnerVolumeSpecName "client-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:40:49.658712 master-2 kubenswrapper[4776]: I1011 10:40:49.658595 4776 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-metrics-server-audit-profiles\") on node \"master-2\" DevicePath \"\"" Oct 11 10:40:49.658712 master-2 kubenswrapper[4776]: I1011 10:40:49.658657 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2vd6\" (UniqueName: \"kubernetes.io/projected/5473628e-94c8-4706-bb03-ff4836debe5f-kube-api-access-q2vd6\") on node \"master-2\" DevicePath \"\"" Oct 11 10:40:49.658712 master-2 kubenswrapper[4776]: I1011 10:40:49.658686 4776 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5473628e-94c8-4706-bb03-ff4836debe5f-audit-log\") on node \"master-2\" DevicePath \"\"" Oct 11 10:40:49.659072 master-2 kubenswrapper[4776]: I1011 10:40:49.658727 4776 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-client-certs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:40:49.659072 master-2 kubenswrapper[4776]: I1011 10:40:49.658745 4776 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-client-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:40:49.659072 master-2 kubenswrapper[4776]: I1011 10:40:49.658755 4776 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5473628e-94c8-4706-bb03-ff4836debe5f-secret-metrics-server-tls\") on node \"master-2\" DevicePath \"\"" Oct 11 10:40:49.659072 master-2 kubenswrapper[4776]: I1011 10:40:49.658767 4776 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5473628e-94c8-4706-bb03-ff4836debe5f-configmap-kubelet-serving-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:40:50.024514 master-2 kubenswrapper[4776]: I1011 10:40:50.024340 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body= Oct 11 10:40:50.024514 master-2 kubenswrapper[4776]: I1011 10:40:50.024470 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" Oct 11 10:40:50.395936 master-2 kubenswrapper[4776]: I1011 10:40:50.395709 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-65d86dff78-crzgp" Oct 11 10:40:50.446921 master-2 kubenswrapper[4776]: I1011 10:40:50.446864 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-65d86dff78-crzgp"] Oct 11 10:40:50.450860 master-2 kubenswrapper[4776]: I1011 10:40:50.450807 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-65d86dff78-crzgp"] Oct 11 10:40:51.464487 master-2 kubenswrapper[4776]: I1011 10:40:51.464372 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:40:51.465782 master-2 kubenswrapper[4776]: I1011 10:40:51.464476 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:40:51.465782 master-2 kubenswrapper[4776]: I1011 10:40:51.464722 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" Oct 11 10:40:51.465980 master-2 kubenswrapper[4776]: I1011 10:40:51.465868 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:40:51.465980 master-2 kubenswrapper[4776]: I1011 10:40:51.465920 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:40:52.076459 master-2 kubenswrapper[4776]: I1011 10:40:52.076352 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5473628e-94c8-4706-bb03-ff4836debe5f" path="/var/lib/kubelet/pods/5473628e-94c8-4706-bb03-ff4836debe5f/volumes" Oct 11 10:40:56.464586 master-2 kubenswrapper[4776]: I1011 10:40:56.464508 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:40:56.465301 master-2 kubenswrapper[4776]: I1011 10:40:56.464587 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:40:58.388175 master-2 kubenswrapper[4776]: I1011 10:40:58.388043 4776 patch_prober.go:28] interesting pod/kube-controller-manager-master-2 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe 
status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:40:58.388175 master-2 kubenswrapper[4776]: I1011 10:40:58.388158 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:41:00.023504 master-2 kubenswrapper[4776]: I1011 10:41:00.023435 4776 patch_prober.go:28] interesting pod/console-76f8bc4746-5jp5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" start-of-body= Oct 11 10:41:00.024025 master-2 kubenswrapper[4776]: I1011 10:41:00.023505 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.79:8443/health\": dial tcp 10.128.0.79:8443: connect: connection refused" Oct 11 10:41:01.464412 master-2 kubenswrapper[4776]: I1011 10:41:01.464365 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:41:01.464973 master-2 kubenswrapper[4776]: I1011 10:41:01.464432 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:41:01.565329 master-2 kubenswrapper[4776]: I1011 10:41:01.565228 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-6-master-2"] Oct 11 10:41:01.565731 master-2 kubenswrapper[4776]: E1011 10:41:01.565651 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5473628e-94c8-4706-bb03-ff4836debe5f" containerName="metrics-server" Oct 11 10:41:01.565778 master-2 kubenswrapper[4776]: I1011 10:41:01.565728 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="5473628e-94c8-4706-bb03-ff4836debe5f" containerName="metrics-server" Oct 11 10:41:01.565778 master-2 kubenswrapper[4776]: E1011 10:41:01.565758 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4114c1be-d3d9-438f-b215-619b0aa3e114" containerName="pruner" Oct 11 10:41:01.565778 master-2 kubenswrapper[4776]: I1011 10:41:01.565773 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4114c1be-d3d9-438f-b215-619b0aa3e114" containerName="pruner" Oct 11 10:41:01.566039 master-2 kubenswrapper[4776]: I1011 10:41:01.566005 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="4114c1be-d3d9-438f-b215-619b0aa3e114" containerName="pruner" Oct 11 10:41:01.566079 master-2 kubenswrapper[4776]: I1011 10:41:01.566051 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="5473628e-94c8-4706-bb03-ff4836debe5f" containerName="metrics-server" Oct 11 10:41:01.566902 master-2 kubenswrapper[4776]: I1011 
10:41:01.566866 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-2" Oct 11 10:41:01.570603 master-2 kubenswrapper[4776]: I1011 10:41:01.570518 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-djvlq" Oct 11 10:41:01.579278 master-2 kubenswrapper[4776]: I1011 10:41:01.579213 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-6-master-2"] Oct 11 10:41:01.747859 master-2 kubenswrapper[4776]: I1011 10:41:01.747641 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kubelet-dir\") pod \"installer-6-master-2\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") " pod="openshift-kube-controller-manager/installer-6-master-2" Oct 11 10:41:01.747859 master-2 kubenswrapper[4776]: I1011 10:41:01.747748 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kube-api-access\") pod \"installer-6-master-2\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") " pod="openshift-kube-controller-manager/installer-6-master-2" Oct 11 10:41:01.748206 master-2 kubenswrapper[4776]: I1011 10:41:01.748097 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-var-lock\") pod \"installer-6-master-2\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") " pod="openshift-kube-controller-manager/installer-6-master-2" Oct 11 10:41:01.850729 master-2 kubenswrapper[4776]: I1011 10:41:01.850626 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kubelet-dir\") pod \"installer-6-master-2\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") " pod="openshift-kube-controller-manager/installer-6-master-2" Oct 11 10:41:01.851172 master-2 kubenswrapper[4776]: I1011 10:41:01.850790 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kube-api-access\") pod \"installer-6-master-2\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") " pod="openshift-kube-controller-manager/installer-6-master-2" Oct 11 10:41:01.851172 master-2 kubenswrapper[4776]: I1011 10:41:01.850794 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kubelet-dir\") pod \"installer-6-master-2\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") " pod="openshift-kube-controller-manager/installer-6-master-2" Oct 11 10:41:01.851172 master-2 kubenswrapper[4776]: I1011 10:41:01.850962 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-var-lock\") pod \"installer-6-master-2\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") " pod="openshift-kube-controller-manager/installer-6-master-2" Oct 11 10:41:01.851172 master-2 kubenswrapper[4776]: I1011 10:41:01.851114 4776 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-var-lock\") pod \"installer-6-master-2\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") " pod="openshift-kube-controller-manager/installer-6-master-2" Oct 11 10:41:01.972913 master-2 kubenswrapper[4776]: I1011 10:41:01.972776 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kube-api-access\") pod \"installer-6-master-2\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") " pod="openshift-kube-controller-manager/installer-6-master-2" Oct 11 10:41:02.186194 master-2 kubenswrapper[4776]: I1011 10:41:02.186118 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-2" Oct 11 10:41:02.611136 master-2 kubenswrapper[4776]: I1011 10:41:02.611091 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-6-master-2"] Oct 11 10:41:03.490035 master-2 kubenswrapper[4776]: I1011 10:41:03.489971 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-2" event={"ID":"f5c9d0dc-adaa-427d-9416-8b25d43673d0","Type":"ContainerStarted","Data":"9e3b264b36af8fb8203eaffb48028b74bc6997d9d895e272c171cb9caab5664f"} Oct 11 10:41:03.490035 master-2 kubenswrapper[4776]: I1011 10:41:03.490027 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-2" event={"ID":"f5c9d0dc-adaa-427d-9416-8b25d43673d0","Type":"ContainerStarted","Data":"8fbabddd40d44946c170e869c1c618cdec75e9cb6e63aa5167033a997e2748d9"} Oct 11 10:41:03.608818 master-2 kubenswrapper[4776]: I1011 10:41:03.608745 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-6-master-2" podStartSLOduration=2.608727266 podStartE2EDuration="2.608727266s" podCreationTimestamp="2025-10-11 10:41:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:41:03.608293634 +0000 UTC m=+898.392720363" watchObservedRunningTime="2025-10-11 10:41:03.608727266 +0000 UTC m=+898.393153975" Oct 11 10:41:06.464882 master-2 kubenswrapper[4776]: I1011 10:41:06.464811 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:41:06.465482 master-2 kubenswrapper[4776]: I1011 10:41:06.464906 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:41:08.387710 master-2 kubenswrapper[4776]: I1011 10:41:08.387501 4776 patch_prober.go:28] interesting pod/kube-controller-manager-master-2 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:41:08.390217 master-2 
kubenswrapper[4776]: I1011 10:41:08.387709 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:41:08.390217 master-2 kubenswrapper[4776]: I1011 10:41:08.387815 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:08.390217 master-2 kubenswrapper[4776]: I1011 10:41:08.389223 4776 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec"} pod="openshift-kube-controller-manager/kube-controller-manager-master-2" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Oct 11 10:41:08.390217 master-2 kubenswrapper[4776]: I1011 10:41:08.389486 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" containerID="cri-o://3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec" gracePeriod=30 Oct 11 10:41:10.028487 master-2 kubenswrapper[4776]: I1011 10:41:10.028432 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:41:10.036497 master-2 kubenswrapper[4776]: I1011 10:41:10.036439 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:41:11.464910 master-2 kubenswrapper[4776]: I1011 10:41:11.464811 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:41:11.464910 master-2 kubenswrapper[4776]: I1011 10:41:11.464897 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:41:15.335659 master-2 kubenswrapper[4776]: I1011 10:41:15.335588 4776 scope.go:117] "RemoveContainer" containerID="bc423808a1318a501a04a81a0b62715e5af3476c9da3fb5de99b8aa1ff2380a0" Oct 11 10:41:16.465267 master-2 kubenswrapper[4776]: I1011 10:41:16.465167 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:41:16.466106 master-2 kubenswrapper[4776]: I1011 10:41:16.465277 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get 
\"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:41:21.107714 master-2 kubenswrapper[4776]: I1011 10:41:21.107640 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-76f8bc4746-5jp5k"] Oct 11 10:41:21.464923 master-2 kubenswrapper[4776]: I1011 10:41:21.464847 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:41:21.464923 master-2 kubenswrapper[4776]: I1011 10:41:21.464912 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:41:26.464599 master-2 kubenswrapper[4776]: I1011 10:41:26.464520 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:41:26.465356 master-2 kubenswrapper[4776]: I1011 10:41:26.464626 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:41:31.529865 master-2 kubenswrapper[4776]: I1011 10:41:31.529779 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" Oct 11 10:41:35.675005 master-2 kubenswrapper[4776]: I1011 10:41:35.674913 4776 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"] Oct 11 10:41:35.676004 master-2 kubenswrapper[4776]: I1011 10:41:35.675970 4776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"] Oct 11 10:41:35.676228 master-2 kubenswrapper[4776]: E1011 10:41:35.676198 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager-recovery-controller" Oct 11 10:41:35.676228 master-2 kubenswrapper[4776]: I1011 10:41:35.676216 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager-recovery-controller" Oct 11 10:41:35.676228 master-2 kubenswrapper[4776]: E1011 10:41:35.676230 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager-cert-syncer" Oct 11 10:41:35.676435 master-2 kubenswrapper[4776]: I1011 10:41:35.676236 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager-cert-syncer" Oct 11 10:41:35.676435 master-2 kubenswrapper[4776]: E1011 10:41:35.676246 4776 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2dd82f838b5636582534da82a3996ea6" containerName="cluster-policy-controller" Oct 11 10:41:35.676435 master-2 kubenswrapper[4776]: I1011 10:41:35.676252 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd82f838b5636582534da82a3996ea6" containerName="cluster-policy-controller" Oct 11 10:41:35.676435 master-2 kubenswrapper[4776]: E1011 10:41:35.676262 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" Oct 11 10:41:35.676435 master-2 kubenswrapper[4776]: I1011 10:41:35.676268 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" Oct 11 10:41:35.676435 master-2 kubenswrapper[4776]: I1011 10:41:35.676392 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" Oct 11 10:41:35.676435 master-2 kubenswrapper[4776]: I1011 10:41:35.676404 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" Oct 11 10:41:35.676435 master-2 kubenswrapper[4776]: I1011 10:41:35.676417 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager-recovery-controller" Oct 11 10:41:35.676435 master-2 kubenswrapper[4776]: I1011 10:41:35.676426 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd82f838b5636582534da82a3996ea6" containerName="cluster-policy-controller" Oct 11 10:41:35.676435 master-2 kubenswrapper[4776]: I1011 10:41:35.676436 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager-cert-syncer" Oct 11 10:41:35.676957 master-2 kubenswrapper[4776]: E1011 10:41:35.676532 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" Oct 11 10:41:35.676957 master-2 kubenswrapper[4776]: I1011 10:41:35.676541 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" Oct 11 10:41:35.859160 master-2 kubenswrapper[4776]: I1011 10:41:35.859080 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e2a316e4240b2f9bcd91a14c93331da1-cert-dir\") pod \"kube-controller-manager-master-2\" (UID: \"e2a316e4240b2f9bcd91a14c93331da1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:35.859428 master-2 kubenswrapper[4776]: I1011 10:41:35.859262 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e2a316e4240b2f9bcd91a14c93331da1-resource-dir\") pod \"kube-controller-manager-master-2\" (UID: \"e2a316e4240b2f9bcd91a14c93331da1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:35.960595 master-2 kubenswrapper[4776]: I1011 10:41:35.960459 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e2a316e4240b2f9bcd91a14c93331da1-resource-dir\") pod \"kube-controller-manager-master-2\" (UID: \"e2a316e4240b2f9bcd91a14c93331da1\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:35.960595 master-2 kubenswrapper[4776]: I1011 10:41:35.960562 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e2a316e4240b2f9bcd91a14c93331da1-cert-dir\") pod \"kube-controller-manager-master-2\" (UID: \"e2a316e4240b2f9bcd91a14c93331da1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:35.960595 master-2 kubenswrapper[4776]: I1011 10:41:35.960591 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e2a316e4240b2f9bcd91a14c93331da1-cert-dir\") pod \"kube-controller-manager-master-2\" (UID: \"e2a316e4240b2f9bcd91a14c93331da1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:35.960958 master-2 kubenswrapper[4776]: I1011 10:41:35.960564 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e2a316e4240b2f9bcd91a14c93331da1-resource-dir\") pod \"kube-controller-manager-master-2\" (UID: \"e2a316e4240b2f9bcd91a14c93331da1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:36.065858 master-2 kubenswrapper[4776]: I1011 10:41:36.065811 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" oldPodUID="2dd82f838b5636582534da82a3996ea6" podUID="e2a316e4240b2f9bcd91a14c93331da1" Oct 11 10:41:36.733402 master-2 kubenswrapper[4776]: I1011 10:41:36.732998 4776 generic.go:334] "Generic (PLEG): container finished" podID="f5c9d0dc-adaa-427d-9416-8b25d43673d0" containerID="9e3b264b36af8fb8203eaffb48028b74bc6997d9d895e272c171cb9caab5664f" exitCode=0 Oct 11 10:41:36.734184 master-2 kubenswrapper[4776]: I1011 10:41:36.734141 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-2" event={"ID":"f5c9d0dc-adaa-427d-9416-8b25d43673d0","Type":"ContainerDied","Data":"9e3b264b36af8fb8203eaffb48028b74bc6997d9d895e272c171cb9caab5664f"} Oct 11 10:41:38.098530 master-2 kubenswrapper[4776]: I1011 10:41:38.098475 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-2" Oct 11 10:41:38.297269 master-2 kubenswrapper[4776]: I1011 10:41:38.297163 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-var-lock\") pod \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") " Oct 11 10:41:38.297550 master-2 kubenswrapper[4776]: I1011 10:41:38.297317 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kube-api-access\") pod \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") " Oct 11 10:41:38.297550 master-2 kubenswrapper[4776]: I1011 10:41:38.297457 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kubelet-dir\") pod \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\" (UID: \"f5c9d0dc-adaa-427d-9416-8b25d43673d0\") " Oct 11 10:41:38.297550 master-2 kubenswrapper[4776]: I1011 10:41:38.297457 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-var-lock" (OuterVolumeSpecName: "var-lock") pod "f5c9d0dc-adaa-427d-9416-8b25d43673d0" (UID: "f5c9d0dc-adaa-427d-9416-8b25d43673d0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:41:38.297858 master-2 kubenswrapper[4776]: I1011 10:41:38.297717 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f5c9d0dc-adaa-427d-9416-8b25d43673d0" (UID: "f5c9d0dc-adaa-427d-9416-8b25d43673d0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:41:38.298015 master-2 kubenswrapper[4776]: I1011 10:41:38.297986 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kubelet-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:41:38.298064 master-2 kubenswrapper[4776]: I1011 10:41:38.298014 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f5c9d0dc-adaa-427d-9416-8b25d43673d0-var-lock\") on node \"master-2\" DevicePath \"\"" Oct 11 10:41:38.378881 master-2 kubenswrapper[4776]: I1011 10:41:38.378831 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f5c9d0dc-adaa-427d-9416-8b25d43673d0" (UID: "f5c9d0dc-adaa-427d-9416-8b25d43673d0"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:41:38.403527 master-2 kubenswrapper[4776]: I1011 10:41:38.400038 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5c9d0dc-adaa-427d-9416-8b25d43673d0-kube-api-access\") on node \"master-2\" DevicePath \"\"" Oct 11 10:41:38.745907 master-2 kubenswrapper[4776]: I1011 10:41:38.745859 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_2dd82f838b5636582534da82a3996ea6/kube-controller-manager/1.log" Oct 11 10:41:38.747764 master-2 kubenswrapper[4776]: I1011 10:41:38.747710 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_2dd82f838b5636582534da82a3996ea6/kube-controller-manager/0.log" Oct 11 10:41:38.747764 master-2 kubenswrapper[4776]: I1011 10:41:38.747748 4776 generic.go:334] "Generic (PLEG): container finished" podID="2dd82f838b5636582534da82a3996ea6" containerID="3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec" exitCode=137 Oct 11 10:41:38.747863 master-2 kubenswrapper[4776]: I1011 10:41:38.747809 4776 scope.go:117] "RemoveContainer" containerID="f4e2e49582cba1c1c050448b4ca7740185e407bb7310d805fa88bb2ea911a4b1" Oct 11 10:41:38.749769 master-2 kubenswrapper[4776]: I1011 10:41:38.749738 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-2" event={"ID":"f5c9d0dc-adaa-427d-9416-8b25d43673d0","Type":"ContainerDied","Data":"8fbabddd40d44946c170e869c1c618cdec75e9cb6e63aa5167033a997e2748d9"} Oct 11 10:41:38.749769 master-2 kubenswrapper[4776]: I1011 10:41:38.749765 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fbabddd40d44946c170e869c1c618cdec75e9cb6e63aa5167033a997e2748d9" Oct 11 10:41:38.749885 master-2 kubenswrapper[4776]: I1011 10:41:38.749785 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-2" Oct 11 10:41:39.760664 master-2 kubenswrapper[4776]: I1011 10:41:39.760555 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_2dd82f838b5636582534da82a3996ea6/kube-controller-manager/1.log" Oct 11 10:41:39.762338 master-2 kubenswrapper[4776]: I1011 10:41:39.762265 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="2dd82f838b5636582534da82a3996ea6" containerName="cluster-policy-controller" containerID="cri-o://8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4" gracePeriod=30 Oct 11 10:41:39.762894 master-2 kubenswrapper[4776]: I1011 10:41:39.762391 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752" gracePeriod=30 Oct 11 10:41:39.762894 master-2 kubenswrapper[4776]: I1011 10:41:39.762404 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163" gracePeriod=30 Oct 11 10:41:39.763121 master-2 kubenswrapper[4776]: I1011 10:41:39.762428 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" containerID="cri-o://96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3" gracePeriod=30 Oct 11 10:41:39.786288 master-2 kubenswrapper[4776]: I1011 10:41:39.773085 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" oldPodUID="2dd82f838b5636582534da82a3996ea6" podUID="e2a316e4240b2f9bcd91a14c93331da1" Oct 11 10:41:39.949808 master-2 kubenswrapper[4776]: I1011 10:41:39.949749 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_2dd82f838b5636582534da82a3996ea6/kube-controller-manager/2.log" Oct 11 10:41:39.950554 master-2 kubenswrapper[4776]: I1011 10:41:39.950524 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_2dd82f838b5636582534da82a3996ea6/kube-controller-manager/1.log" Oct 11 10:41:39.951504 master-2 kubenswrapper[4776]: I1011 10:41:39.951479 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_2dd82f838b5636582534da82a3996ea6/kube-controller-manager-cert-syncer/0.log" Oct 11 10:41:39.952235 master-2 kubenswrapper[4776]: I1011 10:41:39.952196 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:39.957580 master-2 kubenswrapper[4776]: I1011 10:41:39.957522 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" oldPodUID="2dd82f838b5636582534da82a3996ea6" podUID="e2a316e4240b2f9bcd91a14c93331da1" Oct 11 10:41:40.022884 master-2 kubenswrapper[4776]: I1011 10:41:40.022747 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-resource-dir\") pod \"2dd82f838b5636582534da82a3996ea6\" (UID: \"2dd82f838b5636582534da82a3996ea6\") " Oct 11 10:41:40.023111 master-2 kubenswrapper[4776]: I1011 10:41:40.022866 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "2dd82f838b5636582534da82a3996ea6" (UID: "2dd82f838b5636582534da82a3996ea6"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:41:40.023111 master-2 kubenswrapper[4776]: I1011 10:41:40.022936 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-cert-dir\") pod \"2dd82f838b5636582534da82a3996ea6\" (UID: \"2dd82f838b5636582534da82a3996ea6\") " Oct 11 10:41:40.023111 master-2 kubenswrapper[4776]: I1011 10:41:40.023015 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "2dd82f838b5636582534da82a3996ea6" (UID: "2dd82f838b5636582534da82a3996ea6"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:41:40.023511 master-2 kubenswrapper[4776]: I1011 10:41:40.023473 4776 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-resource-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:41:40.023511 master-2 kubenswrapper[4776]: I1011 10:41:40.023507 4776 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2dd82f838b5636582534da82a3996ea6-cert-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:41:40.065547 master-2 kubenswrapper[4776]: I1011 10:41:40.065472 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dd82f838b5636582534da82a3996ea6" path="/var/lib/kubelet/pods/2dd82f838b5636582534da82a3996ea6/volumes" Oct 11 10:41:40.770448 master-2 kubenswrapper[4776]: I1011 10:41:40.770406 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_2dd82f838b5636582534da82a3996ea6/kube-controller-manager/2.log" Oct 11 10:41:40.771088 master-2 kubenswrapper[4776]: I1011 10:41:40.771061 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_2dd82f838b5636582534da82a3996ea6/kube-controller-manager/1.log" Oct 11 10:41:40.772260 master-2 kubenswrapper[4776]: I1011 10:41:40.772240 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-2_2dd82f838b5636582534da82a3996ea6/kube-controller-manager-cert-syncer/0.log" Oct 11 10:41:40.772661 master-2 kubenswrapper[4776]: I1011 10:41:40.772638 4776 generic.go:334] "Generic (PLEG): container finished" podID="2dd82f838b5636582534da82a3996ea6" containerID="96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3" exitCode=2 Oct 11 10:41:40.772758 master-2 kubenswrapper[4776]: I1011 10:41:40.772744 4776 generic.go:334] "Generic (PLEG): container finished" podID="2dd82f838b5636582534da82a3996ea6" containerID="ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752" exitCode=0 Oct 11 10:41:40.772828 master-2 kubenswrapper[4776]: I1011 10:41:40.772717 4776 scope.go:117] "RemoveContainer" containerID="96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3" Oct 11 10:41:40.772872 master-2 kubenswrapper[4776]: I1011 10:41:40.772706 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:40.772914 master-2 kubenswrapper[4776]: I1011 10:41:40.772819 4776 generic.go:334] "Generic (PLEG): container finished" podID="2dd82f838b5636582534da82a3996ea6" containerID="02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163" exitCode=2 Oct 11 10:41:40.772985 master-2 kubenswrapper[4776]: I1011 10:41:40.772963 4776 generic.go:334] "Generic (PLEG): container finished" podID="2dd82f838b5636582534da82a3996ea6" containerID="8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4" exitCode=0 Oct 11 10:41:40.779570 master-2 kubenswrapper[4776]: I1011 10:41:40.779533 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" oldPodUID="2dd82f838b5636582534da82a3996ea6" podUID="e2a316e4240b2f9bcd91a14c93331da1" Oct 11 10:41:40.788083 master-2 kubenswrapper[4776]: I1011 10:41:40.788068 4776 scope.go:117] "RemoveContainer" containerID="3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec" Oct 11 10:41:40.806496 master-2 kubenswrapper[4776]: I1011 10:41:40.806282 4776 scope.go:117] "RemoveContainer" containerID="ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752" Oct 11 10:41:40.823516 master-2 kubenswrapper[4776]: I1011 10:41:40.823493 4776 scope.go:117] "RemoveContainer" containerID="02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163" Oct 11 10:41:40.841052 master-2 kubenswrapper[4776]: I1011 10:41:40.841005 4776 scope.go:117] "RemoveContainer" containerID="8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4" Oct 11 10:41:40.859950 master-2 kubenswrapper[4776]: I1011 10:41:40.859905 4776 scope.go:117] "RemoveContainer" containerID="96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3" Oct 11 10:41:40.860395 master-2 kubenswrapper[4776]: E1011 10:41:40.860354 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3\": container with ID starting with 96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3 not found: ID does not exist" containerID="96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3" Oct 11 10:41:40.860451 master-2 kubenswrapper[4776]: I1011 10:41:40.860400 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3"} err="failed to get container status \"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3\": rpc error: code = NotFound desc = could not find container \"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3\": container with ID starting with 96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3 not found: ID does not exist" Oct 11 10:41:40.860451 master-2 kubenswrapper[4776]: I1011 10:41:40.860430 4776 scope.go:117] "RemoveContainer" containerID="3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec" Oct 11 10:41:40.861000 master-2 kubenswrapper[4776]: E1011 10:41:40.860971 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec\": container with ID starting with 3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec not 
found: ID does not exist" containerID="3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec" Oct 11 10:41:40.861070 master-2 kubenswrapper[4776]: I1011 10:41:40.861005 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec"} err="failed to get container status \"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec\": rpc error: code = NotFound desc = could not find container \"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec\": container with ID starting with 3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec not found: ID does not exist" Oct 11 10:41:40.861070 master-2 kubenswrapper[4776]: I1011 10:41:40.861026 4776 scope.go:117] "RemoveContainer" containerID="ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752" Oct 11 10:41:40.861392 master-2 kubenswrapper[4776]: E1011 10:41:40.861365 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752\": container with ID starting with ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752 not found: ID does not exist" containerID="ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752" Oct 11 10:41:40.861495 master-2 kubenswrapper[4776]: I1011 10:41:40.861460 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752"} err="failed to get container status \"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752\": rpc error: code = NotFound desc = could not find container \"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752\": container with ID starting with ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752 not found: ID does not exist" Oct 11 10:41:40.862240 master-2 kubenswrapper[4776]: I1011 10:41:40.862224 4776 scope.go:117] "RemoveContainer" containerID="02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163" Oct 11 10:41:40.862955 master-2 kubenswrapper[4776]: E1011 10:41:40.862935 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163\": container with ID starting with 02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163 not found: ID does not exist" containerID="02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163" Oct 11 10:41:40.863044 master-2 kubenswrapper[4776]: I1011 10:41:40.863028 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163"} err="failed to get container status \"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163\": rpc error: code = NotFound desc = could not find container \"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163\": container with ID starting with 02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163 not found: ID does not exist" Oct 11 10:41:40.863127 master-2 kubenswrapper[4776]: I1011 10:41:40.863111 4776 scope.go:117] "RemoveContainer" containerID="8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4" Oct 11 10:41:40.863552 master-2 kubenswrapper[4776]: E1011 10:41:40.863525 4776 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4\": container with ID starting with 8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4 not found: ID does not exist" containerID="8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4" Oct 11 10:41:40.863613 master-2 kubenswrapper[4776]: I1011 10:41:40.863555 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4"} err="failed to get container status \"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4\": rpc error: code = NotFound desc = could not find container \"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4\": container with ID starting with 8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4 not found: ID does not exist" Oct 11 10:41:40.863613 master-2 kubenswrapper[4776]: I1011 10:41:40.863571 4776 scope.go:117] "RemoveContainer" containerID="96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3" Oct 11 10:41:40.863892 master-2 kubenswrapper[4776]: I1011 10:41:40.863873 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3"} err="failed to get container status \"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3\": rpc error: code = NotFound desc = could not find container \"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3\": container with ID starting with 96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3 not found: ID does not exist" Oct 11 10:41:40.863978 master-2 kubenswrapper[4776]: I1011 10:41:40.863965 4776 scope.go:117] "RemoveContainer" containerID="3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec" Oct 11 10:41:40.864305 master-2 kubenswrapper[4776]: I1011 10:41:40.864280 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec"} err="failed to get container status \"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec\": rpc error: code = NotFound desc = could not find container \"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec\": container with ID starting with 3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec not found: ID does not exist" Oct 11 10:41:40.864305 master-2 kubenswrapper[4776]: I1011 10:41:40.864297 4776 scope.go:117] "RemoveContainer" containerID="ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752" Oct 11 10:41:40.864571 master-2 kubenswrapper[4776]: I1011 10:41:40.864530 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752"} err="failed to get container status \"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752\": rpc error: code = NotFound desc = could not find container \"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752\": container with ID starting with ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752 not found: ID does not exist" Oct 11 10:41:40.864571 master-2 kubenswrapper[4776]: I1011 10:41:40.864564 4776 scope.go:117] "RemoveContainer" containerID="02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163" 
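The entries above show the kubelet's cleanup loop repeatedly asking the runtime to delete container IDs (96247def…, 3a546f6b…, ce5ca890…, 02cf56c4…, 8e40956f…) that CRI-O has already pruned, so every ContainerStatus/DeleteContainer call comes back NotFound and is merely logged. The plain-Go sketch below only illustrates why a NotFound answer during deletion is normally treated as "already done" rather than as a real failure; the runtime type, error value, and helper names are invented for the example and are not the kubelet's or CRI-O's actual API.

    package main

    import (
        "errors"
        "fmt"
    )

    // Stand-in for the gRPC NotFound status the runtime returns once a
    // container ID no longer exists (illustrative only).
    var ErrContainerNotFound = errors.New("container not found")

    // fakeRuntime mimics a runtime that may have already forgotten a container.
    type fakeRuntime struct {
        containers map[string]bool
    }

    func (r *fakeRuntime) RemoveContainer(id string) error {
        if !r.containers[id] {
            return fmt.Errorf("could not find container %q: %w", id, ErrContainerNotFound)
        }
        delete(r.containers, id)
        return nil
    }

    // removeIfPresent treats NotFound as success so repeated cleanup passes
    // stay idempotent, mirroring the "ID does not exist" replies in the log.
    func removeIfPresent(r *fakeRuntime, id string) {
        err := r.RemoveContainer(id)
        switch {
        case err == nil:
            fmt.Printf("removed container %s\n", id[:12])
        case errors.Is(err, ErrContainerNotFound):
            fmt.Printf("container %s already gone, treating removal as done\n", id[:12])
        default:
            fmt.Printf("removal of %s failed: %v\n", id[:12], err)
        }
    }

    func main() {
        const id = "3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec"
        rt := &fakeRuntime{containers: map[string]bool{id: true}}
        removeIfPresent(rt, id) // first pass: actually deleted
        removeIfPresent(rt, id) // second pass: already gone, benign
    }

Run as written, the first call deletes the container and the second reports it as already gone; that second outcome is the harmless analogue of the repeated "DeleteContainer returned error ... ID does not exist" messages recorded above and below.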
Oct 11 10:41:40.864908 master-2 kubenswrapper[4776]: I1011 10:41:40.864880 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163"} err="failed to get container status \"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163\": rpc error: code = NotFound desc = could not find container \"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163\": container with ID starting with 02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163 not found: ID does not exist" Oct 11 10:41:40.864908 master-2 kubenswrapper[4776]: I1011 10:41:40.864903 4776 scope.go:117] "RemoveContainer" containerID="8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4" Oct 11 10:41:40.865259 master-2 kubenswrapper[4776]: I1011 10:41:40.865233 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4"} err="failed to get container status \"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4\": rpc error: code = NotFound desc = could not find container \"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4\": container with ID starting with 8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4 not found: ID does not exist" Oct 11 10:41:40.865338 master-2 kubenswrapper[4776]: I1011 10:41:40.865326 4776 scope.go:117] "RemoveContainer" containerID="96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3" Oct 11 10:41:40.865710 master-2 kubenswrapper[4776]: I1011 10:41:40.865653 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3"} err="failed to get container status \"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3\": rpc error: code = NotFound desc = could not find container \"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3\": container with ID starting with 96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3 not found: ID does not exist" Oct 11 10:41:40.865710 master-2 kubenswrapper[4776]: I1011 10:41:40.865704 4776 scope.go:117] "RemoveContainer" containerID="3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec" Oct 11 10:41:40.866038 master-2 kubenswrapper[4776]: I1011 10:41:40.866009 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec"} err="failed to get container status \"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec\": rpc error: code = NotFound desc = could not find container \"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec\": container with ID starting with 3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec not found: ID does not exist" Oct 11 10:41:40.866038 master-2 kubenswrapper[4776]: I1011 10:41:40.866029 4776 scope.go:117] "RemoveContainer" containerID="ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752" Oct 11 10:41:40.866445 master-2 kubenswrapper[4776]: I1011 10:41:40.866424 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752"} err="failed to get container status \"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752\": rpc error: code = 
NotFound desc = could not find container \"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752\": container with ID starting with ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752 not found: ID does not exist" Oct 11 10:41:40.866445 master-2 kubenswrapper[4776]: I1011 10:41:40.866441 4776 scope.go:117] "RemoveContainer" containerID="02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163" Oct 11 10:41:40.866723 master-2 kubenswrapper[4776]: I1011 10:41:40.866704 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163"} err="failed to get container status \"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163\": rpc error: code = NotFound desc = could not find container \"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163\": container with ID starting with 02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163 not found: ID does not exist" Oct 11 10:41:40.866793 master-2 kubenswrapper[4776]: I1011 10:41:40.866780 4776 scope.go:117] "RemoveContainer" containerID="8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4" Oct 11 10:41:40.867129 master-2 kubenswrapper[4776]: I1011 10:41:40.867078 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4"} err="failed to get container status \"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4\": rpc error: code = NotFound desc = could not find container \"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4\": container with ID starting with 8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4 not found: ID does not exist" Oct 11 10:41:40.867129 master-2 kubenswrapper[4776]: I1011 10:41:40.867107 4776 scope.go:117] "RemoveContainer" containerID="96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3" Oct 11 10:41:40.867553 master-2 kubenswrapper[4776]: I1011 10:41:40.867510 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3"} err="failed to get container status \"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3\": rpc error: code = NotFound desc = could not find container \"96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3\": container with ID starting with 96247defb8c53c90b69a5b7a37a22e0496aca0c4cbf7896563d9cf196c5672d3 not found: ID does not exist" Oct 11 10:41:40.867553 master-2 kubenswrapper[4776]: I1011 10:41:40.867531 4776 scope.go:117] "RemoveContainer" containerID="3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec" Oct 11 10:41:40.867747 master-2 kubenswrapper[4776]: I1011 10:41:40.867717 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec"} err="failed to get container status \"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec\": rpc error: code = NotFound desc = could not find container \"3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec\": container with ID starting with 3a546f6b7b23be381a364e5fbc1bec857d150fadab8ab7cdcdd76abc62818cec not found: ID does not exist" Oct 11 10:41:40.867747 master-2 kubenswrapper[4776]: I1011 10:41:40.867735 4776 scope.go:117] "RemoveContainer" 
containerID="ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752" Oct 11 10:41:40.868081 master-2 kubenswrapper[4776]: I1011 10:41:40.868041 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752"} err="failed to get container status \"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752\": rpc error: code = NotFound desc = could not find container \"ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752\": container with ID starting with ce5ca8906299ccb6f43a96a3dd3cc1fec1295e001dc80fff332070396e764752 not found: ID does not exist" Oct 11 10:41:40.868081 master-2 kubenswrapper[4776]: I1011 10:41:40.868068 4776 scope.go:117] "RemoveContainer" containerID="02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163" Oct 11 10:41:40.868308 master-2 kubenswrapper[4776]: I1011 10:41:40.868288 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163"} err="failed to get container status \"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163\": rpc error: code = NotFound desc = could not find container \"02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163\": container with ID starting with 02cf56c45222d51d26965bf91f2a0877eae54147d6bd0d905f1268b116a1d163 not found: ID does not exist" Oct 11 10:41:40.868524 master-2 kubenswrapper[4776]: I1011 10:41:40.868512 4776 scope.go:117] "RemoveContainer" containerID="8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4" Oct 11 10:41:40.868863 master-2 kubenswrapper[4776]: I1011 10:41:40.868822 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4"} err="failed to get container status \"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4\": rpc error: code = NotFound desc = could not find container \"8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4\": container with ID starting with 8e40956fa1bbe7741f7826ad8bbce5875122b6b76ae53361947dae1c4fa15ae4 not found: ID does not exist" Oct 11 10:41:41.465252 master-2 kubenswrapper[4776]: I1011 10:41:41.465171 4776 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:41:41.465595 master-2 kubenswrapper[4776]: I1011 10:41:41.465287 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:41:46.158059 master-2 kubenswrapper[4776]: I1011 10:41:46.157921 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-76f8bc4746-5jp5k" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" containerID="cri-o://ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f" gracePeriod=15 Oct 11 10:41:46.464709 master-2 kubenswrapper[4776]: I1011 10:41:46.464398 4776 patch_prober.go:28] interesting 
pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" start-of-body= Oct 11 10:41:46.464709 master-2 kubenswrapper[4776]: I1011 10:41:46.464462 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID="a5e255b2-14b3-42ed-9396-f96c40e231c0" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:10257/healthz\": dial tcp 192.168.34.12:10257: connect: connection refused" Oct 11 10:41:46.700087 master-2 kubenswrapper[4776]: I1011 10:41:46.699982 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76f8bc4746-5jp5k_9c72970e-d35b-4f28-8291-e3ed3683c59c/console/0.log" Oct 11 10:41:46.700087 master-2 kubenswrapper[4776]: I1011 10:41:46.700048 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:41:46.757904 master-2 kubenswrapper[4776]: I1011 10:41:46.757696 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5b846b7bb4-7q7ph"] Oct 11 10:41:46.758124 master-2 kubenswrapper[4776]: E1011 10:41:46.758094 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5c9d0dc-adaa-427d-9416-8b25d43673d0" containerName="installer" Oct 11 10:41:46.758124 master-2 kubenswrapper[4776]: I1011 10:41:46.758111 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5c9d0dc-adaa-427d-9416-8b25d43673d0" containerName="installer" Oct 11 10:41:46.758220 master-2 kubenswrapper[4776]: E1011 10:41:46.758131 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" Oct 11 10:41:46.758220 master-2 kubenswrapper[4776]: I1011 10:41:46.758139 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" Oct 11 10:41:46.758220 master-2 kubenswrapper[4776]: E1011 10:41:46.758150 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" Oct 11 10:41:46.758220 master-2 kubenswrapper[4776]: I1011 10:41:46.758159 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" Oct 11 10:41:46.758398 master-2 kubenswrapper[4776]: I1011 10:41:46.758370 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5c9d0dc-adaa-427d-9416-8b25d43673d0" containerName="installer" Oct 11 10:41:46.758398 master-2 kubenswrapper[4776]: I1011 10:41:46.758396 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd82f838b5636582534da82a3996ea6" containerName="kube-controller-manager" Oct 11 10:41:46.758465 master-2 kubenswrapper[4776]: I1011 10:41:46.758410 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerName="console" Oct 11 10:41:46.759060 master-2 kubenswrapper[4776]: I1011 10:41:46.759029 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.827101 master-2 kubenswrapper[4776]: I1011 10:41:46.827037 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76f8bc4746-5jp5k_9c72970e-d35b-4f28-8291-e3ed3683c59c/console/0.log" Oct 11 10:41:46.827433 master-2 kubenswrapper[4776]: I1011 10:41:46.827192 4776 generic.go:334] "Generic (PLEG): container finished" podID="9c72970e-d35b-4f28-8291-e3ed3683c59c" containerID="ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f" exitCode=2 Oct 11 10:41:46.827433 master-2 kubenswrapper[4776]: I1011 10:41:46.827219 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76f8bc4746-5jp5k" Oct 11 10:41:46.827433 master-2 kubenswrapper[4776]: I1011 10:41:46.827254 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76f8bc4746-5jp5k" event={"ID":"9c72970e-d35b-4f28-8291-e3ed3683c59c","Type":"ContainerDied","Data":"ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f"} Oct 11 10:41:46.827433 master-2 kubenswrapper[4776]: I1011 10:41:46.827320 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76f8bc4746-5jp5k" event={"ID":"9c72970e-d35b-4f28-8291-e3ed3683c59c","Type":"ContainerDied","Data":"3717643475eebdbec50aa27932ca525c2e2f047c2a23862ba4394759fc5478d9"} Oct 11 10:41:46.827433 master-2 kubenswrapper[4776]: I1011 10:41:46.827357 4776 scope.go:117] "RemoveContainer" containerID="ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f" Oct 11 10:41:46.828541 master-2 kubenswrapper[4776]: I1011 10:41:46.828472 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-serving-cert\") pod \"9c72970e-d35b-4f28-8291-e3ed3683c59c\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " Oct 11 10:41:46.828541 master-2 kubenswrapper[4776]: I1011 10:41:46.828538 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-oauth-config\") pod \"9c72970e-d35b-4f28-8291-e3ed3683c59c\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.828624 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-config\") pod \"9c72970e-d35b-4f28-8291-e3ed3683c59c\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.828665 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-trusted-ca-bundle\") pod \"9c72970e-d35b-4f28-8291-e3ed3683c59c\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.828723 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-service-ca\") pod \"9c72970e-d35b-4f28-8291-e3ed3683c59c\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.828749 4776 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-oauth-serving-cert\") pod \"9c72970e-d35b-4f28-8291-e3ed3683c59c\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.828816 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nf86\" (UniqueName: \"kubernetes.io/projected/9c72970e-d35b-4f28-8291-e3ed3683c59c-kube-api-access-7nf86\") pod \"9c72970e-d35b-4f28-8291-e3ed3683c59c\" (UID: \"9c72970e-d35b-4f28-8291-e3ed3683c59c\") " Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.829048 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-oauth-config\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.829106 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-service-ca\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.829138 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-trusted-ca-bundle\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.829163 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-serving-cert\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.829196 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-oauth-serving-cert\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.829220 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74gcw\" (UniqueName: \"kubernetes.io/projected/e0a2e987-f2d6-410a-966a-bd82ab791c00-kube-api-access-74gcw\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.829316 master-2 kubenswrapper[4776]: I1011 10:41:46.829264 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-config\") pod \"console-5b846b7bb4-7q7ph\" 
(UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.830959 master-2 kubenswrapper[4776]: I1011 10:41:46.829363 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-config" (OuterVolumeSpecName: "console-config") pod "9c72970e-d35b-4f28-8291-e3ed3683c59c" (UID: "9c72970e-d35b-4f28-8291-e3ed3683c59c"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:41:46.830959 master-2 kubenswrapper[4776]: I1011 10:41:46.829496 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "9c72970e-d35b-4f28-8291-e3ed3683c59c" (UID: "9c72970e-d35b-4f28-8291-e3ed3683c59c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:41:46.830959 master-2 kubenswrapper[4776]: I1011 10:41:46.829645 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9c72970e-d35b-4f28-8291-e3ed3683c59c" (UID: "9c72970e-d35b-4f28-8291-e3ed3683c59c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:41:46.831260 master-2 kubenswrapper[4776]: I1011 10:41:46.831118 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-service-ca" (OuterVolumeSpecName: "service-ca") pod "9c72970e-d35b-4f28-8291-e3ed3683c59c" (UID: "9c72970e-d35b-4f28-8291-e3ed3683c59c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:41:46.832169 master-2 kubenswrapper[4776]: I1011 10:41:46.832106 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "9c72970e-d35b-4f28-8291-e3ed3683c59c" (UID: "9c72970e-d35b-4f28-8291-e3ed3683c59c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:41:46.833709 master-2 kubenswrapper[4776]: I1011 10:41:46.833598 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c72970e-d35b-4f28-8291-e3ed3683c59c-kube-api-access-7nf86" (OuterVolumeSpecName: "kube-api-access-7nf86") pod "9c72970e-d35b-4f28-8291-e3ed3683c59c" (UID: "9c72970e-d35b-4f28-8291-e3ed3683c59c"). InnerVolumeSpecName "kube-api-access-7nf86". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:41:46.834392 master-2 kubenswrapper[4776]: I1011 10:41:46.834347 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "9c72970e-d35b-4f28-8291-e3ed3683c59c" (UID: "9c72970e-d35b-4f28-8291-e3ed3683c59c"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:41:46.892693 master-2 kubenswrapper[4776]: I1011 10:41:46.892607 4776 scope.go:117] "RemoveContainer" containerID="ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f" Oct 11 10:41:46.893586 master-2 kubenswrapper[4776]: E1011 10:41:46.893508 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f\": container with ID starting with ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f not found: ID does not exist" containerID="ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f" Oct 11 10:41:46.893656 master-2 kubenswrapper[4776]: I1011 10:41:46.893603 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f"} err="failed to get container status \"ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f\": rpc error: code = NotFound desc = could not find container \"ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f\": container with ID starting with ef7498852bb03ff3457e40f4ab81f6745319df9c3ce9d39c54dcedfe233edf0f not found: ID does not exist" Oct 11 10:41:46.930950 master-2 kubenswrapper[4776]: I1011 10:41:46.930878 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-service-ca\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.930950 master-2 kubenswrapper[4776]: I1011 10:41:46.930949 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-serving-cert\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.931185 master-2 kubenswrapper[4776]: I1011 10:41:46.930978 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-trusted-ca-bundle\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.931185 master-2 kubenswrapper[4776]: I1011 10:41:46.931009 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-oauth-serving-cert\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.931185 master-2 kubenswrapper[4776]: I1011 10:41:46.931027 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74gcw\" (UniqueName: \"kubernetes.io/projected/e0a2e987-f2d6-410a-966a-bd82ab791c00-kube-api-access-74gcw\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.931185 master-2 kubenswrapper[4776]: I1011 10:41:46.931067 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-config\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.931185 master-2 kubenswrapper[4776]: I1011 10:41:46.931103 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-oauth-config\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.931185 master-2 kubenswrapper[4776]: I1011 10:41:46.931174 4776 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:41:46.931185 master-2 kubenswrapper[4776]: I1011 10:41:46.931186 4776 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-oauth-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:41:46.931404 master-2 kubenswrapper[4776]: I1011 10:41:46.931197 4776 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-console-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:41:46.931404 master-2 kubenswrapper[4776]: I1011 10:41:46.931212 4776 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:41:46.931404 master-2 kubenswrapper[4776]: I1011 10:41:46.931225 4776 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-service-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:41:46.931404 master-2 kubenswrapper[4776]: I1011 10:41:46.931236 4776 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c72970e-d35b-4f28-8291-e3ed3683c59c-oauth-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:41:46.931404 master-2 kubenswrapper[4776]: I1011 10:41:46.931248 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nf86\" (UniqueName: \"kubernetes.io/projected/9c72970e-d35b-4f28-8291-e3ed3683c59c-kube-api-access-7nf86\") on node \"master-2\" DevicePath \"\"" Oct 11 10:41:46.931934 master-2 kubenswrapper[4776]: I1011 10:41:46.931879 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-service-ca\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.932909 master-2 kubenswrapper[4776]: I1011 10:41:46.932844 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-config\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.933287 master-2 kubenswrapper[4776]: I1011 10:41:46.933249 4776 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-trusted-ca-bundle\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.934624 master-2 kubenswrapper[4776]: I1011 10:41:46.934547 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-oauth-serving-cert\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.934778 master-2 kubenswrapper[4776]: I1011 10:41:46.934741 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-oauth-config\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.938941 master-2 kubenswrapper[4776]: I1011 10:41:46.938886 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-serving-cert\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:46.959724 master-2 kubenswrapper[4776]: I1011 10:41:46.959544 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74gcw\" (UniqueName: \"kubernetes.io/projected/e0a2e987-f2d6-410a-966a-bd82ab791c00-kube-api-access-74gcw\") pod \"console-5b846b7bb4-7q7ph\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:47.016137 master-2 kubenswrapper[4776]: I1011 10:41:47.016056 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b846b7bb4-7q7ph"] Oct 11 10:41:47.058093 master-2 kubenswrapper[4776]: I1011 10:41:47.057983 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:47.078267 master-2 kubenswrapper[4776]: I1011 10:41:47.077790 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:47.083292 master-2 kubenswrapper[4776]: I1011 10:41:47.083246 4776 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="bbf811db-ddc6-4cfb-9181-057546f4c7bd" Oct 11 10:41:47.083292 master-2 kubenswrapper[4776]: I1011 10:41:47.083281 4776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podUID="bbf811db-ddc6-4cfb-9181-057546f4c7bd" Oct 11 10:41:47.274998 master-2 kubenswrapper[4776]: I1011 10:41:47.274906 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"] Oct 11 10:41:47.390287 master-2 kubenswrapper[4776]: I1011 10:41:47.390238 4776 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:47.416250 master-2 kubenswrapper[4776]: I1011 10:41:47.416208 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-76f8bc4746-5jp5k"] Oct 11 10:41:47.433125 master-2 kubenswrapper[4776]: I1011 10:41:47.433031 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"] Oct 11 10:41:47.492410 master-2 kubenswrapper[4776]: I1011 10:41:47.492263 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-76f8bc4746-5jp5k"] Oct 11 10:41:47.519654 master-2 kubenswrapper[4776]: I1011 10:41:47.518052 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:47.530392 master-2 kubenswrapper[4776]: W1011 10:41:47.530334 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0a2e987_f2d6_410a_966a_bd82ab791c00.slice/crio-43085277e27210405939bccfc3c1c615afc3a4069b47f8fc3de8344908605eaa WatchSource:0}: Error finding container 43085277e27210405939bccfc3c1c615afc3a4069b47f8fc3de8344908605eaa: Status 404 returned error can't find the container with id 43085277e27210405939bccfc3c1c615afc3a4069b47f8fc3de8344908605eaa Oct 11 10:41:47.531343 master-2 kubenswrapper[4776]: I1011 10:41:47.531318 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b846b7bb4-7q7ph"] Oct 11 10:41:47.542154 master-2 kubenswrapper[4776]: I1011 10:41:47.542120 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-2"] Oct 11 10:41:47.546262 master-2 kubenswrapper[4776]: W1011 10:41:47.546229 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2a316e4240b2f9bcd91a14c93331da1.slice/crio-e2533c3f52c5a4d170bae783110ce88d21f90e2de86a07d24975ca02c5c10002 WatchSource:0}: Error finding container e2533c3f52c5a4d170bae783110ce88d21f90e2de86a07d24975ca02c5c10002: Status 404 returned error can't find the container with id e2533c3f52c5a4d170bae783110ce88d21f90e2de86a07d24975ca02c5c10002 Oct 11 10:41:47.840366 master-2 kubenswrapper[4776]: I1011 10:41:47.840309 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" 
event={"ID":"e2a316e4240b2f9bcd91a14c93331da1","Type":"ContainerStarted","Data":"9189b059d2886230b84e1fc6455d591be19246958f0bbf9d6b5c50b947a7be8d"} Oct 11 10:41:47.840614 master-2 kubenswrapper[4776]: I1011 10:41:47.840368 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"e2a316e4240b2f9bcd91a14c93331da1","Type":"ContainerStarted","Data":"e2533c3f52c5a4d170bae783110ce88d21f90e2de86a07d24975ca02c5c10002"} Oct 11 10:41:47.843583 master-2 kubenswrapper[4776]: I1011 10:41:47.843550 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b846b7bb4-7q7ph" event={"ID":"e0a2e987-f2d6-410a-966a-bd82ab791c00","Type":"ContainerStarted","Data":"b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f"} Oct 11 10:41:47.843709 master-2 kubenswrapper[4776]: I1011 10:41:47.843695 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b846b7bb4-7q7ph" event={"ID":"e0a2e987-f2d6-410a-966a-bd82ab791c00","Type":"ContainerStarted","Data":"43085277e27210405939bccfc3c1c615afc3a4069b47f8fc3de8344908605eaa"} Oct 11 10:41:47.870021 master-2 kubenswrapper[4776]: I1011 10:41:47.869933 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5b846b7bb4-7q7ph" podStartSLOduration=26.86991216 podStartE2EDuration="26.86991216s" podCreationTimestamp="2025-10-11 10:41:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:41:47.868130793 +0000 UTC m=+942.652557502" watchObservedRunningTime="2025-10-11 10:41:47.86991216 +0000 UTC m=+942.654338869" Oct 11 10:41:48.079777 master-2 kubenswrapper[4776]: I1011 10:41:48.076234 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c72970e-d35b-4f28-8291-e3ed3683c59c" path="/var/lib/kubelet/pods/9c72970e-d35b-4f28-8291-e3ed3683c59c/volumes" Oct 11 10:41:48.860418 master-2 kubenswrapper[4776]: I1011 10:41:48.860239 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"e2a316e4240b2f9bcd91a14c93331da1","Type":"ContainerStarted","Data":"f06c4e8acf494fe24eccc1ce43df9eb229509ce5cd888092b73e7eba38862d46"} Oct 11 10:41:48.860418 master-2 kubenswrapper[4776]: I1011 10:41:48.860350 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"e2a316e4240b2f9bcd91a14c93331da1","Type":"ContainerStarted","Data":"82e9c03cc0b9f224b8b66ac568bd942c0785a5327a13aa88e776428fd4d3d837"} Oct 11 10:41:48.860418 master-2 kubenswrapper[4776]: I1011 10:41:48.860376 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" event={"ID":"e2a316e4240b2f9bcd91a14c93331da1","Type":"ContainerStarted","Data":"9678e04476c4e6953c7acc1060e1d800851772e39435db54daec2daa1356b64e"} Oct 11 10:41:48.902694 master-2 kubenswrapper[4776]: I1011 10:41:48.902562 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" podStartSLOduration=1.902541726 podStartE2EDuration="1.902541726s" podCreationTimestamp="2025-10-11 10:41:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:41:48.899184776 
+0000 UTC m=+943.683611485" watchObservedRunningTime="2025-10-11 10:41:48.902541726 +0000 UTC m=+943.686968435" Oct 11 10:41:57.078789 master-2 kubenswrapper[4776]: I1011 10:41:57.078719 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:57.078789 master-2 kubenswrapper[4776]: I1011 10:41:57.078797 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:41:57.081473 master-2 kubenswrapper[4776]: I1011 10:41:57.081419 4776 patch_prober.go:28] interesting pod/console-5b846b7bb4-7q7ph container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Oct 11 10:41:57.081561 master-2 kubenswrapper[4776]: I1011 10:41:57.081482 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5b846b7bb4-7q7ph" podUID="e0a2e987-f2d6-410a-966a-bd82ab791c00" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" Oct 11 10:41:57.519004 master-2 kubenswrapper[4776]: I1011 10:41:57.518875 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:57.519004 master-2 kubenswrapper[4776]: I1011 10:41:57.518996 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:57.519270 master-2 kubenswrapper[4776]: I1011 10:41:57.519024 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:57.519270 master-2 kubenswrapper[4776]: I1011 10:41:57.519045 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:57.524822 master-2 kubenswrapper[4776]: I1011 10:41:57.524766 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:57.525914 master-2 kubenswrapper[4776]: I1011 10:41:57.525840 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:57.935350 master-2 kubenswrapper[4776]: I1011 10:41:57.935291 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:41:58.943844 master-2 kubenswrapper[4776]: I1011 10:41:58.943800 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" Oct 11 10:42:07.084184 master-2 kubenswrapper[4776]: I1011 10:42:07.084042 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:42:07.088621 master-2 kubenswrapper[4776]: I1011 10:42:07.088584 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:42:50.899992 master-2 kubenswrapper[4776]: I1011 10:42:50.899877 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-oauth-apiserver/apiserver-656768b4df-5xgzs"] Oct 11 10:42:50.900714 master-2 kubenswrapper[4776]: I1011 10:42:50.900114 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" containerID="cri-o://0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee" gracePeriod=120 Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: I1011 10:42:55.890397 4776 patch_prober.go:28] interesting pod/apiserver-656768b4df-5xgzs container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:42:55.890481 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:42:55.891513 master-2 kubenswrapper[4776]: I1011 10:42:55.890487 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: I1011 10:43:00.893579 4776 patch_prober.go:28] interesting pod/apiserver-656768b4df-5xgzs container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:43:00.893645 
master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:43:00.893645 master-2 kubenswrapper[4776]: I1011 10:43:00.893644 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: I1011 10:43:05.889880 4776 patch_prober.go:28] interesting pod/apiserver-656768b4df-5xgzs container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:43:05.889940 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:43:05.891153 master-2 kubenswrapper[4776]: I1011 10:43:05.889957 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:43:05.891153 master-2 kubenswrapper[4776]: I1011 10:43:05.890048 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: I1011 10:43:10.894846 4776 patch_prober.go:28] interesting pod/apiserver-656768b4df-5xgzs container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok 
Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:43:10.894988 master-2 kubenswrapper[4776]: I1011 10:43:10.894924 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: I1011 10:43:15.890902 4776 patch_prober.go:28] interesting pod/apiserver-656768b4df-5xgzs container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:43:15.891010 master-2 kubenswrapper[4776]: I1011 10:43:15.890989 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:43:17.769359 master-2 kubenswrapper[4776]: I1011 10:43:17.769301 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-v6dfc_8757af56-20fb-439e-adba-7e4e50378936/assisted-installer-controller/0.log" Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: I1011 10:43:20.891763 4776 patch_prober.go:28] interesting pod/apiserver-656768b4df-5xgzs container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: 
[+]etcd-readiness excluded: ok Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:43:20.891860 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:43:20.892897 master-2 kubenswrapper[4776]: I1011 10:43:20.891879 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: I1011 10:43:25.891288 4776 patch_prober.go:28] interesting pod/apiserver-656768b4df-5xgzs container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:43:25.891336 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:43:25.892364 master-2 kubenswrapper[4776]: I1011 10:43:25.892335 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: I1011 10:43:30.892652 4776 patch_prober.go:28] interesting pod/apiserver-656768b4df-5xgzs container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: 
[+]log ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:43:30.892809 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:43:30.894238 master-2 kubenswrapper[4776]: I1011 10:43:30.892846 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: I1011 10:43:35.890855 4776 patch_prober.go:28] interesting pod/apiserver-656768b4df-5xgzs container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:43:35.890926 master-2 kubenswrapper[4776]: I1011 10:43:35.890917 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: I1011 10:43:40.889820 4776 patch_prober.go:28] interesting pod/apiserver-656768b4df-5xgzs container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:43:40.889992 master-2 kubenswrapper[4776]: I1011 10:43:40.889901 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:43:43.376274 master-2 kubenswrapper[4776]: I1011 10:43:43.376212 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:43:43.442663 master-2 kubenswrapper[4776]: I1011 10:43:43.442287 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"] Oct 11 10:43:43.442663 master-2 kubenswrapper[4776]: E1011 10:43:43.442653 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" Oct 11 10:43:43.442663 master-2 kubenswrapper[4776]: I1011 10:43:43.442688 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" Oct 11 10:43:43.442663 master-2 kubenswrapper[4776]: E1011 10:43:43.442716 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="fix-audit-permissions" Oct 11 10:43:43.442663 master-2 kubenswrapper[4776]: I1011 10:43:43.442724 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="fix-audit-permissions" Oct 11 10:43:43.443278 master-2 kubenswrapper[4776]: I1011 10:43:43.442889 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerName="oauth-apiserver" Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.447372 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.448087 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-dir\") pod \"407e7df9-fbe8-44b1-8dde-bafa356e904c\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.448556 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-serving-cert\") pod \"407e7df9-fbe8-44b1-8dde-bafa356e904c\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.448583 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-policies\") pod \"407e7df9-fbe8-44b1-8dde-bafa356e904c\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.448621 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cjsj\" (UniqueName: \"kubernetes.io/projected/407e7df9-fbe8-44b1-8dde-bafa356e904c-kube-api-access-4cjsj\") pod \"407e7df9-fbe8-44b1-8dde-bafa356e904c\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.448645 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-encryption-config\") pod \"407e7df9-fbe8-44b1-8dde-bafa356e904c\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.448737 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-client\") pod \"407e7df9-fbe8-44b1-8dde-bafa356e904c\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.448801 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-trusted-ca-bundle\") pod \"407e7df9-fbe8-44b1-8dde-bafa356e904c\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.448905 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-serving-ca\") pod \"407e7df9-fbe8-44b1-8dde-bafa356e904c\" (UID: \"407e7df9-fbe8-44b1-8dde-bafa356e904c\") " Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.449041 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d346790-931a-4f91-b588-0b6249da0cd0-audit-policies\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.449066 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr7j7\" (UniqueName: \"kubernetes.io/projected/1d346790-931a-4f91-b588-0b6249da0cd0-kube-api-access-zr7j7\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.449088 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d346790-931a-4f91-b588-0b6249da0cd0-etcd-serving-ca\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.449107 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d346790-931a-4f91-b588-0b6249da0cd0-etcd-client\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.449146 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d346790-931a-4f91-b588-0b6249da0cd0-serving-cert\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.449172 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d346790-931a-4f91-b588-0b6249da0cd0-audit-dir\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.449248 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d346790-931a-4f91-b588-0b6249da0cd0-encryption-config\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.449282 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d346790-931a-4f91-b588-0b6249da0cd0-trusted-ca-bundle\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.449357 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "407e7df9-fbe8-44b1-8dde-bafa356e904c" (UID: "407e7df9-fbe8-44b1-8dde-bafa356e904c"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:43:43.452741 master-2 kubenswrapper[4776]: I1011 10:43:43.452445 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"] Oct 11 10:43:43.456364 master-2 kubenswrapper[4776]: I1011 10:43:43.456166 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "407e7df9-fbe8-44b1-8dde-bafa356e904c" (UID: "407e7df9-fbe8-44b1-8dde-bafa356e904c"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:43:43.459670 master-2 kubenswrapper[4776]: I1011 10:43:43.456586 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "407e7df9-fbe8-44b1-8dde-bafa356e904c" (UID: "407e7df9-fbe8-44b1-8dde-bafa356e904c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:43:43.459670 master-2 kubenswrapper[4776]: I1011 10:43:43.456951 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "407e7df9-fbe8-44b1-8dde-bafa356e904c" (UID: "407e7df9-fbe8-44b1-8dde-bafa356e904c"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:43:43.459670 master-2 kubenswrapper[4776]: I1011 10:43:43.458886 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "407e7df9-fbe8-44b1-8dde-bafa356e904c" (UID: "407e7df9-fbe8-44b1-8dde-bafa356e904c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:43:43.460802 master-2 kubenswrapper[4776]: I1011 10:43:43.460747 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "407e7df9-fbe8-44b1-8dde-bafa356e904c" (UID: "407e7df9-fbe8-44b1-8dde-bafa356e904c"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:43:43.462048 master-2 kubenswrapper[4776]: I1011 10:43:43.461840 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "407e7df9-fbe8-44b1-8dde-bafa356e904c" (UID: "407e7df9-fbe8-44b1-8dde-bafa356e904c"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:43:43.467713 master-2 kubenswrapper[4776]: I1011 10:43:43.467116 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/407e7df9-fbe8-44b1-8dde-bafa356e904c-kube-api-access-4cjsj" (OuterVolumeSpecName: "kube-api-access-4cjsj") pod "407e7df9-fbe8-44b1-8dde-bafa356e904c" (UID: "407e7df9-fbe8-44b1-8dde-bafa356e904c"). InnerVolumeSpecName "kube-api-access-4cjsj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:43:43.551367 master-2 kubenswrapper[4776]: I1011 10:43:43.551277 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d346790-931a-4f91-b588-0b6249da0cd0-serving-cert\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.552008 master-2 kubenswrapper[4776]: I1011 10:43:43.551967 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d346790-931a-4f91-b588-0b6249da0cd0-audit-dir\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.552155 master-2 kubenswrapper[4776]: I1011 10:43:43.552137 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d346790-931a-4f91-b588-0b6249da0cd0-encryption-config\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.552320 master-2 kubenswrapper[4776]: I1011 10:43:43.552303 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d346790-931a-4f91-b588-0b6249da0cd0-trusted-ca-bundle\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.552475 master-2 kubenswrapper[4776]: I1011 10:43:43.552458 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d346790-931a-4f91-b588-0b6249da0cd0-audit-policies\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.552971 master-2 kubenswrapper[4776]: I1011 10:43:43.552354 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d346790-931a-4f91-b588-0b6249da0cd0-audit-dir\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.553052 master-2 kubenswrapper[4776]: I1011 10:43:43.552977 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr7j7\" (UniqueName: \"kubernetes.io/projected/1d346790-931a-4f91-b588-0b6249da0cd0-kube-api-access-zr7j7\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.553110 master-2 kubenswrapper[4776]: I1011 10:43:43.553090 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d346790-931a-4f91-b588-0b6249da0cd0-etcd-serving-ca\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.553156 master-2 kubenswrapper[4776]: I1011 10:43:43.553116 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1d346790-931a-4f91-b588-0b6249da0cd0-etcd-client\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.553231 master-2 kubenswrapper[4776]: I1011 10:43:43.553207 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:43:43.553231 master-2 kubenswrapper[4776]: I1011 10:43:43.553226 4776 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-policies\") on node \"master-2\" DevicePath \"\"" Oct 11 10:43:43.553326 master-2 kubenswrapper[4776]: I1011 10:43:43.553239 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cjsj\" (UniqueName: \"kubernetes.io/projected/407e7df9-fbe8-44b1-8dde-bafa356e904c-kube-api-access-4cjsj\") on node \"master-2\" DevicePath \"\"" Oct 11 10:43:43.553326 master-2 kubenswrapper[4776]: I1011 10:43:43.553251 4776 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-encryption-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:43:43.553326 master-2 kubenswrapper[4776]: I1011 10:43:43.553262 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-client\") on node \"master-2\" DevicePath \"\"" Oct 11 10:43:43.553326 master-2 kubenswrapper[4776]: I1011 10:43:43.553271 4776 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:43:43.553326 master-2 kubenswrapper[4776]: I1011 10:43:43.553280 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/407e7df9-fbe8-44b1-8dde-bafa356e904c-etcd-serving-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:43:43.553326 master-2 kubenswrapper[4776]: I1011 10:43:43.553290 4776 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/407e7df9-fbe8-44b1-8dde-bafa356e904c-audit-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:43:43.553563 master-2 kubenswrapper[4776]: I1011 10:43:43.553337 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d346790-931a-4f91-b588-0b6249da0cd0-trusted-ca-bundle\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.553931 master-2 kubenswrapper[4776]: I1011 10:43:43.553898 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d346790-931a-4f91-b588-0b6249da0cd0-etcd-serving-ca\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.554894 master-2 kubenswrapper[4776]: I1011 10:43:43.554872 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/1d346790-931a-4f91-b588-0b6249da0cd0-audit-policies\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.555928 master-2 kubenswrapper[4776]: I1011 10:43:43.555869 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d346790-931a-4f91-b588-0b6249da0cd0-serving-cert\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.557326 master-2 kubenswrapper[4776]: I1011 10:43:43.557269 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d346790-931a-4f91-b588-0b6249da0cd0-etcd-client\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.557413 master-2 kubenswrapper[4776]: I1011 10:43:43.557384 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d346790-931a-4f91-b588-0b6249da0cd0-encryption-config\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.570966 master-2 kubenswrapper[4776]: I1011 10:43:43.570903 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr7j7\" (UniqueName: \"kubernetes.io/projected/1d346790-931a-4f91-b588-0b6249da0cd0-kube-api-access-zr7j7\") pod \"apiserver-68f4c55ff4-hr9gc\" (UID: \"1d346790-931a-4f91-b588-0b6249da0cd0\") " pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:43.626740 master-2 kubenswrapper[4776]: I1011 10:43:43.626574 4776 generic.go:334] "Generic (PLEG): container finished" podID="407e7df9-fbe8-44b1-8dde-bafa356e904c" containerID="0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee" exitCode=0 Oct 11 10:43:43.626740 master-2 kubenswrapper[4776]: I1011 10:43:43.626662 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" Oct 11 10:43:43.626997 master-2 kubenswrapper[4776]: I1011 10:43:43.626644 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" event={"ID":"407e7df9-fbe8-44b1-8dde-bafa356e904c","Type":"ContainerDied","Data":"0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee"} Oct 11 10:43:43.626997 master-2 kubenswrapper[4776]: I1011 10:43:43.626901 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-656768b4df-5xgzs" event={"ID":"407e7df9-fbe8-44b1-8dde-bafa356e904c","Type":"ContainerDied","Data":"4696d703bfc528a3bf9bd99fc217e6dc2e1faa3cb905d36cd446e1df3ecf761e"} Oct 11 10:43:43.626997 master-2 kubenswrapper[4776]: I1011 10:43:43.626954 4776 scope.go:117] "RemoveContainer" containerID="0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee" Oct 11 10:43:43.646595 master-2 kubenswrapper[4776]: I1011 10:43:43.646559 4776 scope.go:117] "RemoveContainer" containerID="e22a6e86526375bc0d8d2c641b58225fad6eb7321af7924f677c48967d81f960" Oct 11 10:43:43.662691 master-2 kubenswrapper[4776]: I1011 10:43:43.662617 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-656768b4df-5xgzs"] Oct 11 10:43:43.667816 master-2 kubenswrapper[4776]: I1011 10:43:43.667771 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-oauth-apiserver/apiserver-656768b4df-5xgzs"] Oct 11 10:43:43.687059 master-2 kubenswrapper[4776]: I1011 10:43:43.687011 4776 scope.go:117] "RemoveContainer" containerID="0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee" Oct 11 10:43:43.687379 master-2 kubenswrapper[4776]: E1011 10:43:43.687342 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee\": container with ID starting with 0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee not found: ID does not exist" containerID="0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee" Oct 11 10:43:43.687443 master-2 kubenswrapper[4776]: I1011 10:43:43.687384 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee"} err="failed to get container status \"0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee\": rpc error: code = NotFound desc = could not find container \"0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee\": container with ID starting with 0e1fe8bf7dd94d3ed3335b8e92514100ad3378b6c0cf3662b54c8735c53e53ee not found: ID does not exist" Oct 11 10:43:43.687443 master-2 kubenswrapper[4776]: I1011 10:43:43.687411 4776 scope.go:117] "RemoveContainer" containerID="e22a6e86526375bc0d8d2c641b58225fad6eb7321af7924f677c48967d81f960" Oct 11 10:43:43.687844 master-2 kubenswrapper[4776]: E1011 10:43:43.687791 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e22a6e86526375bc0d8d2c641b58225fad6eb7321af7924f677c48967d81f960\": container with ID starting with e22a6e86526375bc0d8d2c641b58225fad6eb7321af7924f677c48967d81f960 not found: ID does not exist" containerID="e22a6e86526375bc0d8d2c641b58225fad6eb7321af7924f677c48967d81f960" Oct 11 10:43:43.687924 master-2 kubenswrapper[4776]: I1011 10:43:43.687843 4776 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e22a6e86526375bc0d8d2c641b58225fad6eb7321af7924f677c48967d81f960"} err="failed to get container status \"e22a6e86526375bc0d8d2c641b58225fad6eb7321af7924f677c48967d81f960\": rpc error: code = NotFound desc = could not find container \"e22a6e86526375bc0d8d2c641b58225fad6eb7321af7924f677c48967d81f960\": container with ID starting with e22a6e86526375bc0d8d2c641b58225fad6eb7321af7924f677c48967d81f960 not found: ID does not exist" Oct 11 10:43:43.810265 master-2 kubenswrapper[4776]: I1011 10:43:43.810205 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:44.068438 master-2 kubenswrapper[4776]: I1011 10:43:44.068368 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="407e7df9-fbe8-44b1-8dde-bafa356e904c" path="/var/lib/kubelet/pods/407e7df9-fbe8-44b1-8dde-bafa356e904c/volumes" Oct 11 10:43:44.248580 master-2 kubenswrapper[4776]: I1011 10:43:44.248310 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc"] Oct 11 10:43:44.636763 master-2 kubenswrapper[4776]: I1011 10:43:44.636694 4776 generic.go:334] "Generic (PLEG): container finished" podID="1d346790-931a-4f91-b588-0b6249da0cd0" containerID="cc4f498c69ea9832f019d9f23d32281793ea486570689a8233eff5246e1c7c73" exitCode=0 Oct 11 10:43:44.637484 master-2 kubenswrapper[4776]: I1011 10:43:44.636746 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" event={"ID":"1d346790-931a-4f91-b588-0b6249da0cd0","Type":"ContainerDied","Data":"cc4f498c69ea9832f019d9f23d32281793ea486570689a8233eff5246e1c7c73"} Oct 11 10:43:44.637484 master-2 kubenswrapper[4776]: I1011 10:43:44.637150 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" event={"ID":"1d346790-931a-4f91-b588-0b6249da0cd0","Type":"ContainerStarted","Data":"2f76a0e340bec7b367b4fa74984435e2b6f88b42f1e2ce019ae496424c0079da"} Oct 11 10:43:45.645537 master-2 kubenswrapper[4776]: I1011 10:43:45.645501 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" event={"ID":"1d346790-931a-4f91-b588-0b6249da0cd0","Type":"ContainerStarted","Data":"bf065c533130b8217afb792e067a727f075aaa82c3345db41a96954b3ceff80f"} Oct 11 10:43:45.670990 master-2 kubenswrapper[4776]: I1011 10:43:45.670882 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" podStartSLOduration=55.670859856 podStartE2EDuration="55.670859856s" podCreationTimestamp="2025-10-11 10:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:43:45.667267261 +0000 UTC m=+1060.451693980" watchObservedRunningTime="2025-10-11 10:43:45.670859856 +0000 UTC m=+1060.455286575" Oct 11 10:43:48.810875 master-2 kubenswrapper[4776]: I1011 10:43:48.810818 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:48.810875 master-2 kubenswrapper[4776]: I1011 10:43:48.810876 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:48.819549 master-2 kubenswrapper[4776]: I1011 
10:43:48.819440 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:43:49.679056 master-2 kubenswrapper[4776]: I1011 10:43:49.678976 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc" Oct 11 10:44:07.355276 master-2 kubenswrapper[4776]: I1011 10:44:07.355212 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5b846b7bb4-7q7ph"] Oct 11 10:44:13.620632 master-2 kubenswrapper[4776]: I1011 10:44:13.620311 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-6-master-2"] Oct 11 10:44:13.621390 master-2 kubenswrapper[4776]: I1011 10:44:13.621114 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-2" Oct 11 10:44:13.624768 master-2 kubenswrapper[4776]: I1011 10:44:13.623835 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-t28rg" Oct 11 10:44:13.636559 master-2 kubenswrapper[4776]: I1011 10:44:13.636488 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-2"] Oct 11 10:44:13.677332 master-2 kubenswrapper[4776]: I1011 10:44:13.677187 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-kubelet-dir\") pod \"installer-6-master-2\" (UID: \"2e6df740-3969-4dd7-8953-2c21514694b8\") " pod="openshift-kube-apiserver/installer-6-master-2" Oct 11 10:44:13.677858 master-2 kubenswrapper[4776]: I1011 10:44:13.677398 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e6df740-3969-4dd7-8953-2c21514694b8-kube-api-access\") pod \"installer-6-master-2\" (UID: \"2e6df740-3969-4dd7-8953-2c21514694b8\") " pod="openshift-kube-apiserver/installer-6-master-2" Oct 11 10:44:13.677858 master-2 kubenswrapper[4776]: I1011 10:44:13.677486 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-var-lock\") pod \"installer-6-master-2\" (UID: \"2e6df740-3969-4dd7-8953-2c21514694b8\") " pod="openshift-kube-apiserver/installer-6-master-2" Oct 11 10:44:13.778913 master-2 kubenswrapper[4776]: I1011 10:44:13.778845 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-kubelet-dir\") pod \"installer-6-master-2\" (UID: \"2e6df740-3969-4dd7-8953-2c21514694b8\") " pod="openshift-kube-apiserver/installer-6-master-2" Oct 11 10:44:13.778913 master-2 kubenswrapper[4776]: I1011 10:44:13.778899 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e6df740-3969-4dd7-8953-2c21514694b8-kube-api-access\") pod \"installer-6-master-2\" (UID: \"2e6df740-3969-4dd7-8953-2c21514694b8\") " pod="openshift-kube-apiserver/installer-6-master-2" Oct 11 10:44:13.778913 master-2 kubenswrapper[4776]: I1011 10:44:13.778921 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-var-lock\") pod \"installer-6-master-2\" (UID: \"2e6df740-3969-4dd7-8953-2c21514694b8\") " pod="openshift-kube-apiserver/installer-6-master-2" Oct 11 10:44:13.779244 master-2 kubenswrapper[4776]: I1011 10:44:13.779009 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-var-lock\") pod \"installer-6-master-2\" (UID: \"2e6df740-3969-4dd7-8953-2c21514694b8\") " pod="openshift-kube-apiserver/installer-6-master-2" Oct 11 10:44:13.779244 master-2 kubenswrapper[4776]: I1011 10:44:13.779084 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-kubelet-dir\") pod \"installer-6-master-2\" (UID: \"2e6df740-3969-4dd7-8953-2c21514694b8\") " pod="openshift-kube-apiserver/installer-6-master-2" Oct 11 10:44:13.798011 master-2 kubenswrapper[4776]: I1011 10:44:13.797948 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e6df740-3969-4dd7-8953-2c21514694b8-kube-api-access\") pod \"installer-6-master-2\" (UID: \"2e6df740-3969-4dd7-8953-2c21514694b8\") " pod="openshift-kube-apiserver/installer-6-master-2" Oct 11 10:44:13.953594 master-2 kubenswrapper[4776]: I1011 10:44:13.953530 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-2" Oct 11 10:44:14.345925 master-2 kubenswrapper[4776]: I1011 10:44:14.345874 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-2"] Oct 11 10:44:14.857888 master-2 kubenswrapper[4776]: I1011 10:44:14.857824 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-2" event={"ID":"2e6df740-3969-4dd7-8953-2c21514694b8","Type":"ContainerStarted","Data":"dafe48c0553defd2bb14beff21925c74176e23a28e26ccc15fdf50c0af2425e7"} Oct 11 10:44:14.857888 master-2 kubenswrapper[4776]: I1011 10:44:14.857870 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-2" event={"ID":"2e6df740-3969-4dd7-8953-2c21514694b8","Type":"ContainerStarted","Data":"0fdd14a291d71a8f25a23b568e51208656add1332ab642252981b19d9970612d"} Oct 11 10:44:14.884458 master-2 kubenswrapper[4776]: I1011 10:44:14.883234 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-6-master-2" podStartSLOduration=1.88321491 podStartE2EDuration="1.88321491s" podCreationTimestamp="2025-10-11 10:44:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:44:14.882408639 +0000 UTC m=+1089.666835348" watchObservedRunningTime="2025-10-11 10:44:14.88321491 +0000 UTC m=+1089.667641619" Oct 11 10:44:19.627033 master-2 kubenswrapper[4776]: I1011 10:44:19.626974 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-69df5d46bc-klwcv"] Oct 11 10:44:19.628254 master-2 kubenswrapper[4776]: I1011 10:44:19.627221 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" 
containerID="cri-o://f9151fc06dbd01664b47f868a0c43e1a9e6b83f5d73cdb1bae7462ef40f38776" gracePeriod=120 Oct 11 10:44:19.628254 master-2 kubenswrapper[4776]: I1011 10:44:19.627365 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver-check-endpoints" containerID="cri-o://e4993a00e7728dc437a4c6094596c369ce11c26b8ae277d77d9133c67e1933b9" gracePeriod=120 Oct 11 10:44:19.888280 master-2 kubenswrapper[4776]: I1011 10:44:19.888147 4776 generic.go:334] "Generic (PLEG): container finished" podID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerID="e4993a00e7728dc437a4c6094596c369ce11c26b8ae277d77d9133c67e1933b9" exitCode=0 Oct 11 10:44:19.888485 master-2 kubenswrapper[4776]: I1011 10:44:19.888233 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" event={"ID":"4125c617-d1f6-4f29-bae1-1165604b9cbd","Type":"ContainerDied","Data":"e4993a00e7728dc437a4c6094596c369ce11c26b8ae277d77d9133c67e1933b9"} Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: I1011 10:44:20.519257 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:44:20.519321 master-2 kubenswrapper[4776]: I1011 10:44:20.519314 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:44:25.518580 
master-2 kubenswrapper[4776]: I1011 10:44:25.518534 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:44:25.518580 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:44:25.520089 master-2 kubenswrapper[4776]: I1011 10:44:25.518590 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: I1011 10:44:30.522795 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:44:30.522911 
master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:44:30.522911 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:44:30.523905 master-2 kubenswrapper[4776]: I1011 10:44:30.522977 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:44:30.523905 master-2 kubenswrapper[4776]: I1011 10:44:30.523201 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:44:32.414529 master-2 kubenswrapper[4776]: I1011 10:44:32.414338 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5b846b7bb4-7q7ph" podUID="e0a2e987-f2d6-410a-966a-bd82ab791c00" containerName="console" containerID="cri-o://b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f" gracePeriod=15 Oct 11 10:44:32.902622 master-2 kubenswrapper[4776]: I1011 10:44:32.902577 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b846b7bb4-7q7ph_e0a2e987-f2d6-410a-966a-bd82ab791c00/console/0.log" Oct 11 10:44:32.902844 master-2 kubenswrapper[4776]: I1011 10:44:32.902641 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:44:32.949487 master-2 kubenswrapper[4776]: I1011 10:44:32.949434 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6f9d445f57-z6k82"] Oct 11 10:44:32.950281 master-2 kubenswrapper[4776]: E1011 10:44:32.950220 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a2e987-f2d6-410a-966a-bd82ab791c00" containerName="console" Oct 11 10:44:32.950281 master-2 kubenswrapper[4776]: I1011 10:44:32.950249 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a2e987-f2d6-410a-966a-bd82ab791c00" containerName="console" Oct 11 10:44:32.950638 master-2 kubenswrapper[4776]: I1011 10:44:32.950606 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0a2e987-f2d6-410a-966a-bd82ab791c00" containerName="console" Oct 11 10:44:32.951265 master-2 kubenswrapper[4776]: I1011 10:44:32.951233 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:32.959824 master-2 kubenswrapper[4776]: I1011 10:44:32.959775 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6f9d445f57-z6k82"] Oct 11 10:44:32.963660 master-2 kubenswrapper[4776]: I1011 10:44:32.963621 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-service-ca\") pod \"e0a2e987-f2d6-410a-966a-bd82ab791c00\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " Oct 11 10:44:32.963749 master-2 kubenswrapper[4776]: I1011 10:44:32.963690 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-trusted-ca-bundle\") pod \"e0a2e987-f2d6-410a-966a-bd82ab791c00\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " Oct 11 10:44:32.963749 master-2 kubenswrapper[4776]: I1011 10:44:32.963743 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-config\") pod \"e0a2e987-f2d6-410a-966a-bd82ab791c00\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " Oct 11 10:44:32.963839 master-2 kubenswrapper[4776]: I1011 10:44:32.963817 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-oauth-config\") pod \"e0a2e987-f2d6-410a-966a-bd82ab791c00\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " Oct 11 10:44:32.963887 master-2 kubenswrapper[4776]: I1011 10:44:32.963870 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-serving-cert\") pod \"e0a2e987-f2d6-410a-966a-bd82ab791c00\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " Oct 11 10:44:32.963921 master-2 kubenswrapper[4776]: I1011 10:44:32.963908 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74gcw\" (UniqueName: \"kubernetes.io/projected/e0a2e987-f2d6-410a-966a-bd82ab791c00-kube-api-access-74gcw\") pod \"e0a2e987-f2d6-410a-966a-bd82ab791c00\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " Oct 11 10:44:32.963976 master-2 kubenswrapper[4776]: I1011 10:44:32.963955 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-oauth-serving-cert\") pod \"e0a2e987-f2d6-410a-966a-bd82ab791c00\" (UID: \"e0a2e987-f2d6-410a-966a-bd82ab791c00\") " Oct 11 10:44:32.964136 master-2 kubenswrapper[4776]: I1011 10:44:32.964108 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-service-ca" (OuterVolumeSpecName: "service-ca") pod "e0a2e987-f2d6-410a-966a-bd82ab791c00" (UID: "e0a2e987-f2d6-410a-966a-bd82ab791c00"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:44:32.964414 master-2 kubenswrapper[4776]: I1011 10:44:32.964377 4776 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-service-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:44:32.965040 master-2 kubenswrapper[4776]: I1011 10:44:32.965007 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-config" (OuterVolumeSpecName: "console-config") pod "e0a2e987-f2d6-410a-966a-bd82ab791c00" (UID: "e0a2e987-f2d6-410a-966a-bd82ab791c00"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:44:32.965346 master-2 kubenswrapper[4776]: I1011 10:44:32.965300 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e0a2e987-f2d6-410a-966a-bd82ab791c00" (UID: "e0a2e987-f2d6-410a-966a-bd82ab791c00"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:44:32.965889 master-2 kubenswrapper[4776]: I1011 10:44:32.965842 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e0a2e987-f2d6-410a-966a-bd82ab791c00" (UID: "e0a2e987-f2d6-410a-966a-bd82ab791c00"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:44:32.967952 master-2 kubenswrapper[4776]: I1011 10:44:32.967894 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e0a2e987-f2d6-410a-966a-bd82ab791c00" (UID: "e0a2e987-f2d6-410a-966a-bd82ab791c00"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:44:32.968921 master-2 kubenswrapper[4776]: I1011 10:44:32.968887 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e0a2e987-f2d6-410a-966a-bd82ab791c00" (UID: "e0a2e987-f2d6-410a-966a-bd82ab791c00"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:44:32.971624 master-2 kubenswrapper[4776]: I1011 10:44:32.971588 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0a2e987-f2d6-410a-966a-bd82ab791c00-kube-api-access-74gcw" (OuterVolumeSpecName: "kube-api-access-74gcw") pod "e0a2e987-f2d6-410a-966a-bd82ab791c00" (UID: "e0a2e987-f2d6-410a-966a-bd82ab791c00"). InnerVolumeSpecName "kube-api-access-74gcw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:44:32.981837 master-2 kubenswrapper[4776]: I1011 10:44:32.981778 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b846b7bb4-7q7ph_e0a2e987-f2d6-410a-966a-bd82ab791c00/console/0.log" Oct 11 10:44:32.981837 master-2 kubenswrapper[4776]: I1011 10:44:32.981830 4776 generic.go:334] "Generic (PLEG): container finished" podID="e0a2e987-f2d6-410a-966a-bd82ab791c00" containerID="b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f" exitCode=2 Oct 11 10:44:32.982076 master-2 kubenswrapper[4776]: I1011 10:44:32.981862 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b846b7bb4-7q7ph" event={"ID":"e0a2e987-f2d6-410a-966a-bd82ab791c00","Type":"ContainerDied","Data":"b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f"} Oct 11 10:44:32.982076 master-2 kubenswrapper[4776]: I1011 10:44:32.981892 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b846b7bb4-7q7ph" event={"ID":"e0a2e987-f2d6-410a-966a-bd82ab791c00","Type":"ContainerDied","Data":"43085277e27210405939bccfc3c1c615afc3a4069b47f8fc3de8344908605eaa"} Oct 11 10:44:32.982076 master-2 kubenswrapper[4776]: I1011 10:44:32.981909 4776 scope.go:117] "RemoveContainer" containerID="b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f" Oct 11 10:44:32.982076 master-2 kubenswrapper[4776]: I1011 10:44:32.982001 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b846b7bb4-7q7ph" Oct 11 10:44:33.021663 master-2 kubenswrapper[4776]: I1011 10:44:33.021281 4776 scope.go:117] "RemoveContainer" containerID="b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f" Oct 11 10:44:33.021663 master-2 kubenswrapper[4776]: E1011 10:44:33.021649 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f\": container with ID starting with b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f not found: ID does not exist" containerID="b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f" Oct 11 10:44:33.021663 master-2 kubenswrapper[4776]: I1011 10:44:33.021689 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f"} err="failed to get container status \"b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f\": rpc error: code = NotFound desc = could not find container \"b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f\": container with ID starting with b08e19c49c807a0fa3b9d8c5f4f3d2473fa0cf942f968516198fc95a747e243f not found: ID does not exist" Oct 11 10:44:33.035206 master-2 kubenswrapper[4776]: I1011 10:44:33.035137 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5b846b7bb4-7q7ph"] Oct 11 10:44:33.038948 master-2 kubenswrapper[4776]: I1011 10:44:33.038900 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5b846b7bb4-7q7ph"] Oct 11 10:44:33.065370 master-2 kubenswrapper[4776]: I1011 10:44:33.065310 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-oauth-serving-cert\") pod 
\"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:33.065370 master-2 kubenswrapper[4776]: I1011 10:44:33.065356 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzqm8\" (UniqueName: \"kubernetes.io/projected/9ac259b6-cf42-49b4-b1b7-76cc9072d059-kube-api-access-tzqm8\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:33.065551 master-2 kubenswrapper[4776]: I1011 10:44:33.065373 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-serving-cert\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:33.065551 master-2 kubenswrapper[4776]: I1011 10:44:33.065442 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-service-ca\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:33.065551 master-2 kubenswrapper[4776]: I1011 10:44:33.065473 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-oauth-config\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:33.065551 master-2 kubenswrapper[4776]: I1011 10:44:33.065510 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-config\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:33.065551 master-2 kubenswrapper[4776]: I1011 10:44:33.065530 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-trusted-ca-bundle\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:33.065839 master-2 kubenswrapper[4776]: I1011 10:44:33.065685 4776 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:44:33.065839 master-2 kubenswrapper[4776]: I1011 10:44:33.065697 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74gcw\" (UniqueName: \"kubernetes.io/projected/e0a2e987-f2d6-410a-966a-bd82ab791c00-kube-api-access-74gcw\") on node \"master-2\" DevicePath \"\"" Oct 11 10:44:33.065839 master-2 kubenswrapper[4776]: I1011 10:44:33.065707 4776 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-oauth-serving-cert\") on node \"master-2\" 
DevicePath \"\"" Oct 11 10:44:33.065839 master-2 kubenswrapper[4776]: I1011 10:44:33.065715 4776 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:44:33.065839 master-2 kubenswrapper[4776]: I1011 10:44:33.065724 4776 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:44:33.065839 master-2 kubenswrapper[4776]: I1011 10:44:33.065732 4776 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e0a2e987-f2d6-410a-966a-bd82ab791c00-console-oauth-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:44:33.166958 master-2 kubenswrapper[4776]: I1011 10:44:33.166884 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-oauth-config\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:33.167702 master-2 kubenswrapper[4776]: I1011 10:44:33.167648 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-config\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:33.167774 master-2 kubenswrapper[4776]: I1011 10:44:33.167744 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-trusted-ca-bundle\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:33.167897 master-2 kubenswrapper[4776]: I1011 10:44:33.167867 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-oauth-serving-cert\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:33.167937 master-2 kubenswrapper[4776]: I1011 10:44:33.167901 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-serving-cert\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:33.167937 master-2 kubenswrapper[4776]: I1011 10:44:33.167926 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzqm8\" (UniqueName: \"kubernetes.io/projected/9ac259b6-cf42-49b4-b1b7-76cc9072d059-kube-api-access-tzqm8\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:33.167999 master-2 kubenswrapper[4776]: I1011 10:44:33.167962 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-service-ca\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:33.168556 master-2 kubenswrapper[4776]: I1011 10:44:33.168516 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-config\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:33.168992 master-2 kubenswrapper[4776]: I1011 10:44:33.168952 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-oauth-serving-cert\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:33.168992 master-2 kubenswrapper[4776]: I1011 10:44:33.168983 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-service-ca\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:33.169408 master-2 kubenswrapper[4776]: I1011 10:44:33.169371 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-trusted-ca-bundle\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:33.170376 master-2 kubenswrapper[4776]: I1011 10:44:33.170341 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-oauth-config\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:33.171132 master-2 kubenswrapper[4776]: I1011 10:44:33.171089 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-serving-cert\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:33.191838 master-2 kubenswrapper[4776]: I1011 10:44:33.191787 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzqm8\" (UniqueName: \"kubernetes.io/projected/9ac259b6-cf42-49b4-b1b7-76cc9072d059-kube-api-access-tzqm8\") pod \"console-6f9d445f57-z6k82\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:33.277996 master-2 kubenswrapper[4776]: I1011 10:44:33.277944 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:33.696287 master-2 kubenswrapper[4776]: I1011 10:44:33.696226 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6f9d445f57-z6k82"] Oct 11 10:44:33.700298 master-2 kubenswrapper[4776]: W1011 10:44:33.700244 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ac259b6_cf42_49b4_b1b7_76cc9072d059.slice/crio-88f0b704732f3071e12ed605ea3d2e3766c911ff93df7e212bf6bc543d745d21 WatchSource:0}: Error finding container 88f0b704732f3071e12ed605ea3d2e3766c911ff93df7e212bf6bc543d745d21: Status 404 returned error can't find the container with id 88f0b704732f3071e12ed605ea3d2e3766c911ff93df7e212bf6bc543d745d21 Oct 11 10:44:33.992316 master-2 kubenswrapper[4776]: I1011 10:44:33.992144 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f9d445f57-z6k82" event={"ID":"9ac259b6-cf42-49b4-b1b7-76cc9072d059","Type":"ContainerStarted","Data":"64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c"} Oct 11 10:44:33.992316 master-2 kubenswrapper[4776]: I1011 10:44:33.992238 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f9d445f57-z6k82" event={"ID":"9ac259b6-cf42-49b4-b1b7-76cc9072d059","Type":"ContainerStarted","Data":"88f0b704732f3071e12ed605ea3d2e3766c911ff93df7e212bf6bc543d745d21"} Oct 11 10:44:34.023440 master-2 kubenswrapper[4776]: I1011 10:44:34.023313 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6f9d445f57-z6k82" podStartSLOduration=27.023291146 podStartE2EDuration="27.023291146s" podCreationTimestamp="2025-10-11 10:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:44:34.023237174 +0000 UTC m=+1108.807663933" watchObservedRunningTime="2025-10-11 10:44:34.023291146 +0000 UTC m=+1108.807717855" Oct 11 10:44:34.072201 master-2 kubenswrapper[4776]: I1011 10:44:34.072103 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0a2e987-f2d6-410a-966a-bd82ab791c00" path="/var/lib/kubelet/pods/e0a2e987-f2d6-410a-966a-bd82ab791c00/volumes" Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: I1011 10:44:35.521569 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: 
[+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:44:35.521623 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:44:35.522702 master-2 kubenswrapper[4776]: I1011 10:44:35.521638 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: I1011 10:44:40.520926 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:44:40.520997 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:44:40.522106 master-2 kubenswrapper[4776]: I1011 10:44:40.521006 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" 
containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:44:43.278323 master-2 kubenswrapper[4776]: I1011 10:44:43.278267 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:43.278944 master-2 kubenswrapper[4776]: I1011 10:44:43.278413 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:43.285166 master-2 kubenswrapper[4776]: I1011 10:44:43.285131 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:44.071237 master-2 kubenswrapper[4776]: I1011 10:44:44.071147 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: I1011 10:44:45.523304 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:44:45.523420 master-2 kubenswrapper[4776]: I1011 10:44:45.523410 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: I1011 10:44:50.519493 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:44:50.519561 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:44:50.520899 master-2 kubenswrapper[4776]: I1011 10:44:50.519554 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:44:52.364575 master-2 kubenswrapper[4776]: I1011 10:44:52.364511 4776 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-2"] Oct 11 10:44:52.365168 master-2 kubenswrapper[4776]: I1011 10:44:52.364828 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-2" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver" containerID="cri-o://ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da" gracePeriod=135 Oct 11 10:44:52.365168 master-2 kubenswrapper[4776]: I1011 10:44:52.364887 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-2" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9" gracePeriod=135 Oct 11 10:44:52.365168 master-2 kubenswrapper[4776]: I1011 10:44:52.364949 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-2" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6" 
gracePeriod=135 Oct 11 10:44:52.365168 master-2 kubenswrapper[4776]: I1011 10:44:52.364975 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-2" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-cert-syncer" containerID="cri-o://e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510" gracePeriod=135 Oct 11 10:44:52.365168 master-2 kubenswrapper[4776]: I1011 10:44:52.365101 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-2" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57" gracePeriod=135 Oct 11 10:44:52.366976 master-2 kubenswrapper[4776]: I1011 10:44:52.366942 4776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-2"] Oct 11 10:44:52.367371 master-2 kubenswrapper[4776]: E1011 10:44:52.367343 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver" Oct 11 10:44:52.367371 master-2 kubenswrapper[4776]: I1011 10:44:52.367363 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver" Oct 11 10:44:52.367475 master-2 kubenswrapper[4776]: E1011 10:44:52.367384 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-cert-regeneration-controller" Oct 11 10:44:52.367475 master-2 kubenswrapper[4776]: I1011 10:44:52.367393 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-cert-regeneration-controller" Oct 11 10:44:52.367475 master-2 kubenswrapper[4776]: E1011 10:44:52.367432 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-insecure-readyz" Oct 11 10:44:52.367475 master-2 kubenswrapper[4776]: I1011 10:44:52.367442 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-insecure-readyz" Oct 11 10:44:52.367475 master-2 kubenswrapper[4776]: E1011 10:44:52.367455 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9041570beb5002e8da158e70e12f0c16" containerName="setup" Oct 11 10:44:52.367475 master-2 kubenswrapper[4776]: I1011 10:44:52.367463 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="9041570beb5002e8da158e70e12f0c16" containerName="setup" Oct 11 10:44:52.367475 master-2 kubenswrapper[4776]: E1011 10:44:52.367475 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-cert-syncer" Oct 11 10:44:52.367801 master-2 kubenswrapper[4776]: I1011 10:44:52.367482 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-cert-syncer" Oct 11 10:44:52.367801 master-2 kubenswrapper[4776]: E1011 10:44:52.367525 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-check-endpoints" Oct 11 10:44:52.367801 master-2 kubenswrapper[4776]: I1011 10:44:52.367534 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="9041570beb5002e8da158e70e12f0c16" 
containerName="kube-apiserver-check-endpoints" Oct 11 10:44:52.367801 master-2 kubenswrapper[4776]: I1011 10:44:52.367732 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-cert-regeneration-controller" Oct 11 10:44:52.367801 master-2 kubenswrapper[4776]: I1011 10:44:52.367779 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-insecure-readyz" Oct 11 10:44:52.367801 master-2 kubenswrapper[4776]: I1011 10:44:52.367799 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver" Oct 11 10:44:52.368055 master-2 kubenswrapper[4776]: I1011 10:44:52.367808 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-cert-syncer" Oct 11 10:44:52.368055 master-2 kubenswrapper[4776]: I1011 10:44:52.367843 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="9041570beb5002e8da158e70e12f0c16" containerName="kube-apiserver-check-endpoints" Oct 11 10:44:52.452257 master-2 kubenswrapper[4776]: I1011 10:44:52.452190 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/978811670a28b21932e323b181b31435-cert-dir\") pod \"kube-apiserver-master-2\" (UID: \"978811670a28b21932e323b181b31435\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:44:52.452494 master-2 kubenswrapper[4776]: I1011 10:44:52.452275 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/978811670a28b21932e323b181b31435-resource-dir\") pod \"kube-apiserver-master-2\" (UID: \"978811670a28b21932e323b181b31435\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:44:52.452494 master-2 kubenswrapper[4776]: I1011 10:44:52.452336 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/978811670a28b21932e323b181b31435-audit-dir\") pod \"kube-apiserver-master-2\" (UID: \"978811670a28b21932e323b181b31435\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:44:52.553568 master-2 kubenswrapper[4776]: I1011 10:44:52.553264 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/978811670a28b21932e323b181b31435-audit-dir\") pod \"kube-apiserver-master-2\" (UID: \"978811670a28b21932e323b181b31435\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:44:52.553568 master-2 kubenswrapper[4776]: I1011 10:44:52.553352 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/978811670a28b21932e323b181b31435-cert-dir\") pod \"kube-apiserver-master-2\" (UID: \"978811670a28b21932e323b181b31435\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:44:52.553568 master-2 kubenswrapper[4776]: I1011 10:44:52.553407 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/978811670a28b21932e323b181b31435-resource-dir\") pod \"kube-apiserver-master-2\" (UID: \"978811670a28b21932e323b181b31435\") " 
pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:44:52.553568 master-2 kubenswrapper[4776]: I1011 10:44:52.553507 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/978811670a28b21932e323b181b31435-resource-dir\") pod \"kube-apiserver-master-2\" (UID: \"978811670a28b21932e323b181b31435\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:44:52.553568 master-2 kubenswrapper[4776]: I1011 10:44:52.553555 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/978811670a28b21932e323b181b31435-audit-dir\") pod \"kube-apiserver-master-2\" (UID: \"978811670a28b21932e323b181b31435\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:44:52.553568 master-2 kubenswrapper[4776]: I1011 10:44:52.553584 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/978811670a28b21932e323b181b31435-cert-dir\") pod \"kube-apiserver-master-2\" (UID: \"978811670a28b21932e323b181b31435\") " pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:44:53.138842 master-2 kubenswrapper[4776]: I1011 10:44:53.138785 4776 generic.go:334] "Generic (PLEG): container finished" podID="2e6df740-3969-4dd7-8953-2c21514694b8" containerID="dafe48c0553defd2bb14beff21925c74176e23a28e26ccc15fdf50c0af2425e7" exitCode=0 Oct 11 10:44:53.139042 master-2 kubenswrapper[4776]: I1011 10:44:53.138853 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-2" event={"ID":"2e6df740-3969-4dd7-8953-2c21514694b8","Type":"ContainerDied","Data":"dafe48c0553defd2bb14beff21925c74176e23a28e26ccc15fdf50c0af2425e7"} Oct 11 10:44:53.142157 master-2 kubenswrapper[4776]: I1011 10:44:53.142122 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-2_9041570beb5002e8da158e70e12f0c16/kube-apiserver-cert-syncer/0.log" Oct 11 10:44:53.142873 master-2 kubenswrapper[4776]: I1011 10:44:53.142844 4776 generic.go:334] "Generic (PLEG): container finished" podID="9041570beb5002e8da158e70e12f0c16" containerID="b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57" exitCode=0 Oct 11 10:44:53.142873 master-2 kubenswrapper[4776]: I1011 10:44:53.142872 4776 generic.go:334] "Generic (PLEG): container finished" podID="9041570beb5002e8da158e70e12f0c16" containerID="784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6" exitCode=0 Oct 11 10:44:53.142970 master-2 kubenswrapper[4776]: I1011 10:44:53.142882 4776 generic.go:334] "Generic (PLEG): container finished" podID="9041570beb5002e8da158e70e12f0c16" containerID="f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9" exitCode=0 Oct 11 10:44:53.142970 master-2 kubenswrapper[4776]: I1011 10:44:53.142894 4776 generic.go:334] "Generic (PLEG): container finished" podID="9041570beb5002e8da158e70e12f0c16" containerID="e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510" exitCode=2 Oct 11 10:44:53.166489 master-2 kubenswrapper[4776]: I1011 10:44:53.166397 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-2" oldPodUID="9041570beb5002e8da158e70e12f0c16" podUID="978811670a28b21932e323b181b31435" Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: I1011 10:44:53.308576 4776 patch_prober.go:28] interesting 
pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: 
[+]poststarthook/apiservice-registration-controller ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:44:53.308635 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:44:53.311325 master-2 kubenswrapper[4776]: I1011 10:44:53.308660 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:44:54.547917 master-2 kubenswrapper[4776]: I1011 10:44:54.547857 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-2" Oct 11 10:44:54.588953 master-2 kubenswrapper[4776]: I1011 10:44:54.588884 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-var-lock\") pod \"2e6df740-3969-4dd7-8953-2c21514694b8\" (UID: \"2e6df740-3969-4dd7-8953-2c21514694b8\") " Oct 11 10:44:54.588953 master-2 kubenswrapper[4776]: I1011 10:44:54.588963 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e6df740-3969-4dd7-8953-2c21514694b8-kube-api-access\") pod \"2e6df740-3969-4dd7-8953-2c21514694b8\" (UID: \"2e6df740-3969-4dd7-8953-2c21514694b8\") " Oct 11 10:44:54.589198 master-2 kubenswrapper[4776]: I1011 10:44:54.588996 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-kubelet-dir\") pod \"2e6df740-3969-4dd7-8953-2c21514694b8\" (UID: \"2e6df740-3969-4dd7-8953-2c21514694b8\") " Oct 11 10:44:54.589198 master-2 kubenswrapper[4776]: I1011 10:44:54.589026 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-var-lock" (OuterVolumeSpecName: "var-lock") pod "2e6df740-3969-4dd7-8953-2c21514694b8" (UID: "2e6df740-3969-4dd7-8953-2c21514694b8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:44:54.589264 master-2 kubenswrapper[4776]: I1011 10:44:54.589226 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-var-lock\") on node \"master-2\" DevicePath \"\"" Oct 11 10:44:54.589264 master-2 kubenswrapper[4776]: I1011 10:44:54.589253 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2e6df740-3969-4dd7-8953-2c21514694b8" (UID: "2e6df740-3969-4dd7-8953-2c21514694b8"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:44:54.598119 master-2 kubenswrapper[4776]: I1011 10:44:54.598065 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e6df740-3969-4dd7-8953-2c21514694b8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2e6df740-3969-4dd7-8953-2c21514694b8" (UID: "2e6df740-3969-4dd7-8953-2c21514694b8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:44:54.691056 master-2 kubenswrapper[4776]: I1011 10:44:54.690977 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e6df740-3969-4dd7-8953-2c21514694b8-kube-api-access\") on node \"master-2\" DevicePath \"\"" Oct 11 10:44:54.691056 master-2 kubenswrapper[4776]: I1011 10:44:54.691024 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e6df740-3969-4dd7-8953-2c21514694b8-kubelet-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:44:55.158764 master-2 kubenswrapper[4776]: I1011 10:44:55.158633 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-2" event={"ID":"2e6df740-3969-4dd7-8953-2c21514694b8","Type":"ContainerDied","Data":"0fdd14a291d71a8f25a23b568e51208656add1332ab642252981b19d9970612d"} Oct 11 10:44:55.158764 master-2 kubenswrapper[4776]: I1011 10:44:55.158742 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fdd14a291d71a8f25a23b568e51208656add1332ab642252981b19d9970612d" Oct 11 10:44:55.158764 master-2 kubenswrapper[4776]: I1011 10:44:55.158755 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-2" Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: I1011 10:44:55.521394 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok 
Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:44:55.521594 master-2 kubenswrapper[4776]: I1011 10:44:55.521512 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: I1011 10:44:58.320783 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:44:58.320857 
master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:44:58.320857 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:44:58.322420 master-2 kubenswrapper[4776]: I1011 10:44:58.320855 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:44:59.935837 master-2 kubenswrapper[4776]: I1011 10:44:59.935729 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-10-master-2"] Oct 11 10:44:59.937202 master-2 kubenswrapper[4776]: E1011 10:44:59.936005 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e6df740-3969-4dd7-8953-2c21514694b8" containerName="installer" Oct 11 10:44:59.937202 master-2 kubenswrapper[4776]: I1011 10:44:59.936023 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e6df740-3969-4dd7-8953-2c21514694b8" containerName="installer" Oct 11 10:44:59.937202 master-2 kubenswrapper[4776]: I1011 10:44:59.936135 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e6df740-3969-4dd7-8953-2c21514694b8" containerName="installer" Oct 11 10:44:59.937202 master-2 kubenswrapper[4776]: I1011 10:44:59.937080 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-10-master-2" Oct 11 10:44:59.941964 master-2 kubenswrapper[4776]: I1011 10:44:59.940919 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-xbqxb" Oct 11 10:44:59.952302 master-2 kubenswrapper[4776]: I1011 10:44:59.952239 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-10-master-2"] Oct 11 10:45:00.060745 master-2 kubenswrapper[4776]: I1011 10:45:00.060639 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56e683e1-6c74-4998-ac94-05f58a65965f-kube-api-access\") pod \"installer-10-master-2\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " pod="openshift-etcd/installer-10-master-2" Oct 11 10:45:00.060745 master-2 kubenswrapper[4776]: I1011 10:45:00.060737 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-kubelet-dir\") pod \"installer-10-master-2\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " pod="openshift-etcd/installer-10-master-2" Oct 11 10:45:00.061002 master-2 kubenswrapper[4776]: I1011 10:45:00.060910 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-var-lock\") pod \"installer-10-master-2\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " pod="openshift-etcd/installer-10-master-2" Oct 11 10:45:00.163117 master-2 kubenswrapper[4776]: I1011 10:45:00.163062 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56e683e1-6c74-4998-ac94-05f58a65965f-kube-api-access\") pod \"installer-10-master-2\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " pod="openshift-etcd/installer-10-master-2" Oct 11 10:45:00.163699 master-2 kubenswrapper[4776]: I1011 10:45:00.163647 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-kubelet-dir\") pod \"installer-10-master-2\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " pod="openshift-etcd/installer-10-master-2" Oct 11 10:45:00.163866 master-2 kubenswrapper[4776]: I1011 10:45:00.163845 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-var-lock\") pod \"installer-10-master-2\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " pod="openshift-etcd/installer-10-master-2" Oct 11 10:45:00.164197 master-2 kubenswrapper[4776]: I1011 10:45:00.163761 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-kubelet-dir\") pod \"installer-10-master-2\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " pod="openshift-etcd/installer-10-master-2" Oct 11 10:45:00.164197 master-2 kubenswrapper[4776]: I1011 10:45:00.163976 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-var-lock\") pod \"installer-10-master-2\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " 
pod="openshift-etcd/installer-10-master-2" Oct 11 10:45:00.189732 master-2 kubenswrapper[4776]: I1011 10:45:00.189586 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56e683e1-6c74-4998-ac94-05f58a65965f-kube-api-access\") pod \"installer-10-master-2\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " pod="openshift-etcd/installer-10-master-2" Oct 11 10:45:00.217404 master-2 kubenswrapper[4776]: I1011 10:45:00.217364 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv"] Oct 11 10:45:00.218836 master-2 kubenswrapper[4776]: I1011 10:45:00.218817 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv" Oct 11 10:45:00.221299 master-2 kubenswrapper[4776]: I1011 10:45:00.221263 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-hbjq2" Oct 11 10:45:00.221638 master-2 kubenswrapper[4776]: I1011 10:45:00.221541 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 11 10:45:00.229793 master-2 kubenswrapper[4776]: I1011 10:45:00.229551 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv"] Oct 11 10:45:00.266006 master-2 kubenswrapper[4776]: I1011 10:45:00.265961 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4gqp\" (UniqueName: \"kubernetes.io/projected/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-kube-api-access-j4gqp\") pod \"collect-profiles-29336325-mh4sv\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv" Oct 11 10:45:00.266273 master-2 kubenswrapper[4776]: I1011 10:45:00.266259 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-config-volume\") pod \"collect-profiles-29336325-mh4sv\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv" Oct 11 10:45:00.266360 master-2 kubenswrapper[4776]: I1011 10:45:00.266348 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-secret-volume\") pod \"collect-profiles-29336325-mh4sv\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv" Oct 11 10:45:00.279615 master-2 kubenswrapper[4776]: I1011 10:45:00.279585 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-10-master-2" Oct 11 10:45:00.368234 master-2 kubenswrapper[4776]: I1011 10:45:00.368133 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4gqp\" (UniqueName: \"kubernetes.io/projected/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-kube-api-access-j4gqp\") pod \"collect-profiles-29336325-mh4sv\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv" Oct 11 10:45:00.368234 master-2 kubenswrapper[4776]: I1011 10:45:00.368190 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-config-volume\") pod \"collect-profiles-29336325-mh4sv\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv" Oct 11 10:45:00.368234 master-2 kubenswrapper[4776]: I1011 10:45:00.368207 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-secret-volume\") pod \"collect-profiles-29336325-mh4sv\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv" Oct 11 10:45:00.369403 master-2 kubenswrapper[4776]: I1011 10:45:00.369360 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-config-volume\") pod \"collect-profiles-29336325-mh4sv\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv" Oct 11 10:45:00.383585 master-2 kubenswrapper[4776]: I1011 10:45:00.383517 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-secret-volume\") pod \"collect-profiles-29336325-mh4sv\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv" Oct 11 10:45:00.391027 master-2 kubenswrapper[4776]: I1011 10:45:00.390829 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4gqp\" (UniqueName: \"kubernetes.io/projected/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-kube-api-access-j4gqp\") pod \"collect-profiles-29336325-mh4sv\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv" Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: I1011 10:45:00.520598 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: 
[+]poststarthook/max-in-flight-filter ok Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:45:00.520820 master-2 kubenswrapper[4776]: I1011 10:45:00.520780 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:45:00.540780 master-2 kubenswrapper[4776]: I1011 10:45:00.540074 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv" Oct 11 10:45:00.666841 master-2 kubenswrapper[4776]: I1011 10:45:00.666763 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-10-master-2"] Oct 11 10:45:00.674858 master-2 kubenswrapper[4776]: W1011 10:45:00.674798 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod56e683e1_6c74_4998_ac94_05f58a65965f.slice/crio-a100df13f60f80ad4bc323ad2221d41c81f5ea6f0551eaf971a5f40bf6de7a88 WatchSource:0}: Error finding container a100df13f60f80ad4bc323ad2221d41c81f5ea6f0551eaf971a5f40bf6de7a88: Status 404 returned error can't find the container with id a100df13f60f80ad4bc323ad2221d41c81f5ea6f0551eaf971a5f40bf6de7a88 Oct 11 10:45:00.981145 master-2 kubenswrapper[4776]: I1011 10:45:00.981092 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv"] Oct 11 10:45:01.197071 master-2 kubenswrapper[4776]: I1011 10:45:01.197018 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-10-master-2" event={"ID":"56e683e1-6c74-4998-ac94-05f58a65965f","Type":"ContainerStarted","Data":"903137cd4045917d2201001cea3f552800cf2d073b4309b1386b4ec5c2d61b48"} Oct 11 10:45:01.197071 master-2 kubenswrapper[4776]: I1011 10:45:01.197075 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-10-master-2" event={"ID":"56e683e1-6c74-4998-ac94-05f58a65965f","Type":"ContainerStarted","Data":"a100df13f60f80ad4bc323ad2221d41c81f5ea6f0551eaf971a5f40bf6de7a88"} Oct 11 10:45:01.199027 master-2 kubenswrapper[4776]: I1011 10:45:01.198969 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv" 
event={"ID":"a4aea0e1-d6c8-4542-85c7-e46b945d61a0","Type":"ContainerStarted","Data":"84c5dbd5af0fe30b5900ca449a4f0411c4cff8ceb60f39636cf23f6147444fb3"} Oct 11 10:45:01.199027 master-2 kubenswrapper[4776]: I1011 10:45:01.199009 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv" event={"ID":"a4aea0e1-d6c8-4542-85c7-e46b945d61a0","Type":"ContainerStarted","Data":"7d380218deaac4977ed8862e6e0db8e065a96b5a1de4b5ed4bb8fa216797842d"} Oct 11 10:45:01.219531 master-2 kubenswrapper[4776]: I1011 10:45:01.219363 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-10-master-2" podStartSLOduration=2.219343671 podStartE2EDuration="2.219343671s" podCreationTimestamp="2025-10-11 10:44:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:45:01.2163282 +0000 UTC m=+1136.000754909" watchObservedRunningTime="2025-10-11 10:45:01.219343671 +0000 UTC m=+1136.003770380" Oct 11 10:45:01.242524 master-2 kubenswrapper[4776]: I1011 10:45:01.242427 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv" podStartSLOduration=1.242412165 podStartE2EDuration="1.242412165s" podCreationTimestamp="2025-10-11 10:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:45:01.238746796 +0000 UTC m=+1136.023173505" watchObservedRunningTime="2025-10-11 10:45:01.242412165 +0000 UTC m=+1136.026838874" Oct 11 10:45:01.803615 master-2 kubenswrapper[4776]: I1011 10:45:01.802261 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7lq47"] Oct 11 10:45:01.810462 master-2 kubenswrapper[4776]: I1011 10:45:01.808461 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7lq47" Oct 11 10:45:01.821885 master-2 kubenswrapper[4776]: I1011 10:45:01.821840 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7lq47"] Oct 11 10:45:01.893876 master-2 kubenswrapper[4776]: I1011 10:45:01.893762 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-catalog-content\") pod \"certified-operators-7lq47\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " pod="openshift-marketplace/certified-operators-7lq47" Oct 11 10:45:01.894179 master-2 kubenswrapper[4776]: I1011 10:45:01.893891 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-utilities\") pod \"certified-operators-7lq47\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " pod="openshift-marketplace/certified-operators-7lq47" Oct 11 10:45:01.894179 master-2 kubenswrapper[4776]: I1011 10:45:01.893914 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxmvj\" (UniqueName: \"kubernetes.io/projected/89013c68-6873-4f47-bd39-d7eae57cd89b-kube-api-access-jxmvj\") pod \"certified-operators-7lq47\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " pod="openshift-marketplace/certified-operators-7lq47" Oct 11 10:45:01.995129 master-2 kubenswrapper[4776]: I1011 10:45:01.995052 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-utilities\") pod \"certified-operators-7lq47\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " pod="openshift-marketplace/certified-operators-7lq47" Oct 11 10:45:01.995129 master-2 kubenswrapper[4776]: I1011 10:45:01.995109 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxmvj\" (UniqueName: \"kubernetes.io/projected/89013c68-6873-4f47-bd39-d7eae57cd89b-kube-api-access-jxmvj\") pod \"certified-operators-7lq47\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " pod="openshift-marketplace/certified-operators-7lq47" Oct 11 10:45:01.995129 master-2 kubenswrapper[4776]: I1011 10:45:01.995169 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-catalog-content\") pod \"certified-operators-7lq47\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " pod="openshift-marketplace/certified-operators-7lq47" Oct 11 10:45:01.996314 master-2 kubenswrapper[4776]: I1011 10:45:01.995628 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-utilities\") pod \"certified-operators-7lq47\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " pod="openshift-marketplace/certified-operators-7lq47" Oct 11 10:45:01.996314 master-2 kubenswrapper[4776]: I1011 10:45:01.995702 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-catalog-content\") pod \"certified-operators-7lq47\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " 
pod="openshift-marketplace/certified-operators-7lq47" Oct 11 10:45:02.020421 master-2 kubenswrapper[4776]: I1011 10:45:02.020374 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxmvj\" (UniqueName: \"kubernetes.io/projected/89013c68-6873-4f47-bd39-d7eae57cd89b-kube-api-access-jxmvj\") pod \"certified-operators-7lq47\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " pod="openshift-marketplace/certified-operators-7lq47" Oct 11 10:45:02.158086 master-2 kubenswrapper[4776]: I1011 10:45:02.158020 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7lq47" Oct 11 10:45:02.209504 master-2 kubenswrapper[4776]: I1011 10:45:02.208722 4776 generic.go:334] "Generic (PLEG): container finished" podID="a4aea0e1-d6c8-4542-85c7-e46b945d61a0" containerID="84c5dbd5af0fe30b5900ca449a4f0411c4cff8ceb60f39636cf23f6147444fb3" exitCode=0 Oct 11 10:45:02.209752 master-2 kubenswrapper[4776]: I1011 10:45:02.209532 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv" event={"ID":"a4aea0e1-d6c8-4542-85c7-e46b945d61a0","Type":"ContainerDied","Data":"84c5dbd5af0fe30b5900ca449a4f0411c4cff8ceb60f39636cf23f6147444fb3"} Oct 11 10:45:02.591045 master-2 kubenswrapper[4776]: I1011 10:45:02.590976 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7lq47"] Oct 11 10:45:02.594754 master-2 kubenswrapper[4776]: W1011 10:45:02.594700 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89013c68_6873_4f47_bd39_d7eae57cd89b.slice/crio-94c86deb0008b919de4c04f201973916bff0475350cca9c14ee8e5cc04b4bc3c WatchSource:0}: Error finding container 94c86deb0008b919de4c04f201973916bff0475350cca9c14ee8e5cc04b4bc3c: Status 404 returned error can't find the container with id 94c86deb0008b919de4c04f201973916bff0475350cca9c14ee8e5cc04b4bc3c Oct 11 10:45:03.214453 master-2 kubenswrapper[4776]: I1011 10:45:03.214396 4776 generic.go:334] "Generic (PLEG): container finished" podID="89013c68-6873-4f47-bd39-d7eae57cd89b" containerID="97798087352e6ac819c8a9870fc9a9bbf2fa235ec702671d629829d200039a5e" exitCode=0 Oct 11 10:45:03.214453 master-2 kubenswrapper[4776]: I1011 10:45:03.214444 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lq47" event={"ID":"89013c68-6873-4f47-bd39-d7eae57cd89b","Type":"ContainerDied","Data":"97798087352e6ac819c8a9870fc9a9bbf2fa235ec702671d629829d200039a5e"} Oct 11 10:45:03.214453 master-2 kubenswrapper[4776]: I1011 10:45:03.214493 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lq47" event={"ID":"89013c68-6873-4f47-bd39-d7eae57cd89b","Type":"ContainerStarted","Data":"94c86deb0008b919de4c04f201973916bff0475350cca9c14ee8e5cc04b4bc3c"} Oct 11 10:45:03.215938 master-2 kubenswrapper[4776]: I1011 10:45:03.215663 4776 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: I1011 10:45:03.307157 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:45:03.307214 master-2 
kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:45:03.307214 master-2 
kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:45:03.307214 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:45:03.308414 master-2 kubenswrapper[4776]: I1011 10:45:03.307221 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:45:03.308414 master-2 kubenswrapper[4776]: I1011 10:45:03.307312 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: I1011 10:45:03.312117 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: 
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:45:03.312153 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:45:03.313455 master-2 kubenswrapper[4776]: I1011 10:45:03.312171 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:45:03.544092 master-2 kubenswrapper[4776]: I1011 10:45:03.544048 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv" Oct 11 10:45:03.617667 master-2 kubenswrapper[4776]: I1011 10:45:03.617584 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-config-volume\") pod \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") " Oct 11 10:45:03.617881 master-2 kubenswrapper[4776]: I1011 10:45:03.617692 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-secret-volume\") pod \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") " Oct 11 10:45:03.617881 master-2 kubenswrapper[4776]: I1011 10:45:03.617763 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4gqp\" (UniqueName: \"kubernetes.io/projected/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-kube-api-access-j4gqp\") pod \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\" (UID: \"a4aea0e1-d6c8-4542-85c7-e46b945d61a0\") " Oct 11 10:45:03.618093 master-2 kubenswrapper[4776]: I1011 10:45:03.618053 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-config-volume" (OuterVolumeSpecName: "config-volume") pod "a4aea0e1-d6c8-4542-85c7-e46b945d61a0" (UID: "a4aea0e1-d6c8-4542-85c7-e46b945d61a0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:45:03.620902 master-2 kubenswrapper[4776]: I1011 10:45:03.620873 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a4aea0e1-d6c8-4542-85c7-e46b945d61a0" (UID: "a4aea0e1-d6c8-4542-85c7-e46b945d61a0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:45:03.621028 master-2 kubenswrapper[4776]: I1011 10:45:03.620988 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-kube-api-access-j4gqp" (OuterVolumeSpecName: "kube-api-access-j4gqp") pod "a4aea0e1-d6c8-4542-85c7-e46b945d61a0" (UID: "a4aea0e1-d6c8-4542-85c7-e46b945d61a0"). InnerVolumeSpecName "kube-api-access-j4gqp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:45:03.719121 master-2 kubenswrapper[4776]: I1011 10:45:03.719054 4776 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-config-volume\") on node \"master-2\" DevicePath \"\"" Oct 11 10:45:03.719121 master-2 kubenswrapper[4776]: I1011 10:45:03.719096 4776 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-secret-volume\") on node \"master-2\" DevicePath \"\"" Oct 11 10:45:03.719121 master-2 kubenswrapper[4776]: I1011 10:45:03.719107 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4gqp\" (UniqueName: \"kubernetes.io/projected/a4aea0e1-d6c8-4542-85c7-e46b945d61a0-kube-api-access-j4gqp\") on node \"master-2\" DevicePath \"\"" Oct 11 10:45:04.221488 master-2 kubenswrapper[4776]: I1011 10:45:04.221421 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv" event={"ID":"a4aea0e1-d6c8-4542-85c7-e46b945d61a0","Type":"ContainerDied","Data":"7d380218deaac4977ed8862e6e0db8e065a96b5a1de4b5ed4bb8fa216797842d"} Oct 11 10:45:04.221488 master-2 kubenswrapper[4776]: I1011 10:45:04.221479 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d380218deaac4977ed8862e6e0db8e065a96b5a1de4b5ed4bb8fa216797842d" Oct 11 10:45:04.221488 master-2 kubenswrapper[4776]: I1011 10:45:04.221435 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv" Oct 11 10:45:04.223776 master-2 kubenswrapper[4776]: I1011 10:45:04.223745 4776 generic.go:334] "Generic (PLEG): container finished" podID="89013c68-6873-4f47-bd39-d7eae57cd89b" containerID="d922e24709975d381106a6dcea6ba30e0bc73a5b106d3e63ed79705c8f65ab22" exitCode=0 Oct 11 10:45:04.223776 master-2 kubenswrapper[4776]: I1011 10:45:04.223771 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lq47" event={"ID":"89013c68-6873-4f47-bd39-d7eae57cd89b","Type":"ContainerDied","Data":"d922e24709975d381106a6dcea6ba30e0bc73a5b106d3e63ed79705c8f65ab22"} Oct 11 10:45:05.238135 master-2 kubenswrapper[4776]: I1011 10:45:05.238060 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lq47" event={"ID":"89013c68-6873-4f47-bd39-d7eae57cd89b","Type":"ContainerStarted","Data":"77d005ff33d4c70c9570ec36ad29c9dd57a1f004b98d5ede79869adaab01feb6"} Oct 11 10:45:05.265765 master-2 kubenswrapper[4776]: I1011 10:45:05.265665 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7lq47" podStartSLOduration=2.889387984 podStartE2EDuration="4.26564512s" podCreationTimestamp="2025-10-11 10:45:01 +0000 UTC" firstStartedPulling="2025-10-11 10:45:03.215606653 +0000 UTC m=+1138.000033362" lastFinishedPulling="2025-10-11 10:45:04.591863779 +0000 UTC m=+1139.376290498" observedRunningTime="2025-10-11 10:45:05.26418257 +0000 UTC m=+1140.048609299" watchObservedRunningTime="2025-10-11 10:45:05.26564512 +0000 UTC m=+1140.050071829" Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: I1011 10:45:05.520766 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]etcd excluded: ok Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]etcd-readiness excluded: ok Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/max-in-flight-filter ok Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectcache ok Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startinformers ok Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:45:05.520891 master-2 kubenswrapper[4776]: I1011 10:45:05.520867 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: I1011 10:45:08.306888 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: 
[+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:45:08.307101 master-2 kubenswrapper[4776]: I1011 10:45:08.306983 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:45:10.515888 master-2 kubenswrapper[4776]: I1011 10:45:10.515777 4776 patch_prober.go:28] interesting 
pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body= Oct 11 10:45:10.515888 master-2 kubenswrapper[4776]: I1011 10:45:10.515868 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" Oct 11 10:45:12.158656 master-2 kubenswrapper[4776]: I1011 10:45:12.158585 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7lq47" Oct 11 10:45:12.159163 master-2 kubenswrapper[4776]: I1011 10:45:12.158844 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7lq47" Oct 11 10:45:12.228712 master-2 kubenswrapper[4776]: I1011 10:45:12.228613 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7lq47" Oct 11 10:45:12.376364 master-2 kubenswrapper[4776]: I1011 10:45:12.376296 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7lq47" Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: I1011 10:45:13.310430 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 
10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:45:13.310497 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:45:13.312389 master-2 kubenswrapper[4776]: I1011 10:45:13.310495 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:45:15.148256 master-2 kubenswrapper[4776]: I1011 10:45:15.148144 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7lq47"] Oct 11 10:45:15.149400 master-2 kubenswrapper[4776]: I1011 10:45:15.148503 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7lq47" podUID="89013c68-6873-4f47-bd39-d7eae57cd89b" containerName="registry-server" containerID="cri-o://77d005ff33d4c70c9570ec36ad29c9dd57a1f004b98d5ede79869adaab01feb6" gracePeriod=2 Oct 11 10:45:15.329840 master-2 kubenswrapper[4776]: I1011 10:45:15.329751 4776 generic.go:334] "Generic (PLEG): container finished" podID="89013c68-6873-4f47-bd39-d7eae57cd89b" 
containerID="77d005ff33d4c70c9570ec36ad29c9dd57a1f004b98d5ede79869adaab01feb6" exitCode=0 Oct 11 10:45:15.329840 master-2 kubenswrapper[4776]: I1011 10:45:15.329820 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lq47" event={"ID":"89013c68-6873-4f47-bd39-d7eae57cd89b","Type":"ContainerDied","Data":"77d005ff33d4c70c9570ec36ad29c9dd57a1f004b98d5ede79869adaab01feb6"} Oct 11 10:45:15.515600 master-2 kubenswrapper[4776]: I1011 10:45:15.515474 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body= Oct 11 10:45:15.515957 master-2 kubenswrapper[4776]: I1011 10:45:15.515580 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" Oct 11 10:45:15.662352 master-2 kubenswrapper[4776]: I1011 10:45:15.662299 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7lq47" Oct 11 10:45:15.714210 master-2 kubenswrapper[4776]: I1011 10:45:15.714043 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-utilities\") pod \"89013c68-6873-4f47-bd39-d7eae57cd89b\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " Oct 11 10:45:15.714484 master-2 kubenswrapper[4776]: I1011 10:45:15.714229 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-catalog-content\") pod \"89013c68-6873-4f47-bd39-d7eae57cd89b\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " Oct 11 10:45:15.714484 master-2 kubenswrapper[4776]: I1011 10:45:15.714266 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxmvj\" (UniqueName: \"kubernetes.io/projected/89013c68-6873-4f47-bd39-d7eae57cd89b-kube-api-access-jxmvj\") pod \"89013c68-6873-4f47-bd39-d7eae57cd89b\" (UID: \"89013c68-6873-4f47-bd39-d7eae57cd89b\") " Oct 11 10:45:15.716537 master-2 kubenswrapper[4776]: I1011 10:45:15.716472 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-utilities" (OuterVolumeSpecName: "utilities") pod "89013c68-6873-4f47-bd39-d7eae57cd89b" (UID: "89013c68-6873-4f47-bd39-d7eae57cd89b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:45:15.735268 master-2 kubenswrapper[4776]: I1011 10:45:15.734838 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89013c68-6873-4f47-bd39-d7eae57cd89b-kube-api-access-jxmvj" (OuterVolumeSpecName: "kube-api-access-jxmvj") pod "89013c68-6873-4f47-bd39-d7eae57cd89b" (UID: "89013c68-6873-4f47-bd39-d7eae57cd89b"). InnerVolumeSpecName "kube-api-access-jxmvj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:45:15.764592 master-2 kubenswrapper[4776]: I1011 10:45:15.764510 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89013c68-6873-4f47-bd39-d7eae57cd89b" (UID: "89013c68-6873-4f47-bd39-d7eae57cd89b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:45:15.815851 master-2 kubenswrapper[4776]: I1011 10:45:15.815782 4776 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-catalog-content\") on node \"master-2\" DevicePath \"\"" Oct 11 10:45:15.815851 master-2 kubenswrapper[4776]: I1011 10:45:15.815825 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxmvj\" (UniqueName: \"kubernetes.io/projected/89013c68-6873-4f47-bd39-d7eae57cd89b-kube-api-access-jxmvj\") on node \"master-2\" DevicePath \"\"" Oct 11 10:45:15.815851 master-2 kubenswrapper[4776]: I1011 10:45:15.815836 4776 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89013c68-6873-4f47-bd39-d7eae57cd89b-utilities\") on node \"master-2\" DevicePath \"\"" Oct 11 10:45:16.342941 master-2 kubenswrapper[4776]: I1011 10:45:16.342871 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lq47" event={"ID":"89013c68-6873-4f47-bd39-d7eae57cd89b","Type":"ContainerDied","Data":"94c86deb0008b919de4c04f201973916bff0475350cca9c14ee8e5cc04b4bc3c"} Oct 11 10:45:16.342941 master-2 kubenswrapper[4776]: I1011 10:45:16.342914 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7lq47" Oct 11 10:45:16.343523 master-2 kubenswrapper[4776]: I1011 10:45:16.342955 4776 scope.go:117] "RemoveContainer" containerID="77d005ff33d4c70c9570ec36ad29c9dd57a1f004b98d5ede79869adaab01feb6" Oct 11 10:45:16.365444 master-2 kubenswrapper[4776]: I1011 10:45:16.365413 4776 scope.go:117] "RemoveContainer" containerID="d922e24709975d381106a6dcea6ba30e0bc73a5b106d3e63ed79705c8f65ab22" Oct 11 10:45:16.380056 master-2 kubenswrapper[4776]: I1011 10:45:16.379993 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7lq47"] Oct 11 10:45:16.386830 master-2 kubenswrapper[4776]: I1011 10:45:16.386763 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7lq47"] Oct 11 10:45:16.395917 master-2 kubenswrapper[4776]: I1011 10:45:16.395732 4776 scope.go:117] "RemoveContainer" containerID="97798087352e6ac819c8a9870fc9a9bbf2fa235ec702671d629829d200039a5e" Oct 11 10:45:18.070592 master-2 kubenswrapper[4776]: I1011 10:45:18.070516 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89013c68-6873-4f47-bd39-d7eae57cd89b" path="/var/lib/kubelet/pods/89013c68-6873-4f47-bd39-d7eae57cd89b/volumes" Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: I1011 10:45:18.307556 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 
10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:45:18.307631 master-2 kubenswrapper[4776]: I1011 10:45:18.307632 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:45:20.515907 master-2 kubenswrapper[4776]: I1011 10:45:20.515839 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body= Oct 11 10:45:20.516501 master-2 kubenswrapper[4776]: I1011 10:45:20.515911 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: I1011 10:45:23.308582 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: 
Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: 
[+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:45:23.308669 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:45:23.310577 master-2 kubenswrapper[4776]: I1011 10:45:23.308698 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:45:25.516712 master-2 kubenswrapper[4776]: I1011 10:45:25.516580 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body= Oct 11 10:45:25.517419 master-2 kubenswrapper[4776]: I1011 10:45:25.516770 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: I1011 10:45:28.310319 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: 
[+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:45:28.310412 master-2 kubenswrapper[4776]: I1011 10:45:28.310382 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:45:30.516724 master-2 kubenswrapper[4776]: I1011 10:45:30.516580 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body= Oct 11 10:45:30.517759 master-2 
kubenswrapper[4776]: I1011 10:45:30.516739 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" Oct 11 10:45:32.156343 master-2 kubenswrapper[4776]: I1011 10:45:32.156284 4776 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-2"] Oct 11 10:45:32.156883 master-2 kubenswrapper[4776]: I1011 10:45:32.156649 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-2" podUID="2c4a583adfee975da84510940117e71a" containerName="etcdctl" containerID="cri-o://cc8943c5b4823b597a38ede8102a3e667afad877c11be87f804a4d9fcdbf5687" gracePeriod=30 Oct 11 10:45:32.156883 master-2 kubenswrapper[4776]: I1011 10:45:32.156727 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-2" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-rev" containerID="cri-o://8aca7dd04fbd9bc97f886a62f0850ed592b9776f6bcf8d57f228ba1b4d57e0dd" gracePeriod=30 Oct 11 10:45:32.156984 master-2 kubenswrapper[4776]: I1011 10:45:32.156742 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-2" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-metrics" containerID="cri-o://56dc1b99eea54bd4bc4092f0e7a9e5c850ceefafdfda928c057fe6d1b40b5d1d" gracePeriod=30 Oct 11 10:45:32.156984 master-2 kubenswrapper[4776]: I1011 10:45:32.156875 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-2" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd" containerID="cri-o://8ca4916746dcde3d1a7ba8c08259545f440c11f186b53d82aba07a17030c92d1" gracePeriod=30 Oct 11 10:45:32.156984 master-2 kubenswrapper[4776]: I1011 10:45:32.156797 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-2" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-readyz" containerID="cri-o://431d1c1363285965b411f06e0338b448e40c4fef537351ea45fb00ac08129886" gracePeriod=30 Oct 11 10:45:32.161685 master-2 kubenswrapper[4776]: I1011 10:45:32.161614 4776 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-2"] Oct 11 10:45:32.161995 master-2 kubenswrapper[4776]: E1011 10:45:32.161958 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89013c68-6873-4f47-bd39-d7eae57cd89b" containerName="extract-content" Oct 11 10:45:32.161995 master-2 kubenswrapper[4776]: I1011 10:45:32.161988 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="89013c68-6873-4f47-bd39-d7eae57cd89b" containerName="extract-content" Oct 11 10:45:32.162093 master-2 kubenswrapper[4776]: E1011 10:45:32.162008 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-metrics" Oct 11 10:45:32.162912 master-2 kubenswrapper[4776]: I1011 10:45:32.162773 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-metrics" Oct 11 10:45:32.162912 master-2 kubenswrapper[4776]: E1011 10:45:32.162839 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4aea0e1-d6c8-4542-85c7-e46b945d61a0" containerName="collect-profiles" Oct 11 
10:45:32.162912 master-2 kubenswrapper[4776]: I1011 10:45:32.162853 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4aea0e1-d6c8-4542-85c7-e46b945d61a0" containerName="collect-profiles" Oct 11 10:45:32.162912 master-2 kubenswrapper[4776]: E1011 10:45:32.162873 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-readyz" Oct 11 10:45:32.162912 master-2 kubenswrapper[4776]: I1011 10:45:32.162881 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-readyz" Oct 11 10:45:32.162912 master-2 kubenswrapper[4776]: E1011 10:45:32.162893 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89013c68-6873-4f47-bd39-d7eae57cd89b" containerName="extract-utilities" Oct 11 10:45:32.162912 master-2 kubenswrapper[4776]: I1011 10:45:32.162901 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="89013c68-6873-4f47-bd39-d7eae57cd89b" containerName="extract-utilities" Oct 11 10:45:32.162912 master-2 kubenswrapper[4776]: E1011 10:45:32.162913 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-resources-copy" Oct 11 10:45:32.162912 master-2 kubenswrapper[4776]: I1011 10:45:32.162921 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-resources-copy" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: E1011 10:45:32.162936 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-ensure-env-vars" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.162944 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-ensure-env-vars" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: E1011 10:45:32.162960 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4a583adfee975da84510940117e71a" containerName="etcdctl" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.162968 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4a583adfee975da84510940117e71a" containerName="etcdctl" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: E1011 10:45:32.162979 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.162994 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: E1011 10:45:32.163011 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89013c68-6873-4f47-bd39-d7eae57cd89b" containerName="registry-server" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.163019 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="89013c68-6873-4f47-bd39-d7eae57cd89b" containerName="registry-server" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: E1011 10:45:32.163030 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4a583adfee975da84510940117e71a" containerName="setup" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.163038 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4a583adfee975da84510940117e71a" containerName="setup" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: E1011 10:45:32.163056 4776 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-rev" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.163064 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-rev" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.163378 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c4a583adfee975da84510940117e71a" containerName="etcdctl" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.163406 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-readyz" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.163421 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-rev" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.163441 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4aea0e1-d6c8-4542-85c7-e46b945d61a0" containerName="collect-profiles" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.163452 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.163466 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c4a583adfee975da84510940117e71a" containerName="etcd-metrics" Oct 11 10:45:32.163692 master-2 kubenswrapper[4776]: I1011 10:45:32.163483 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="89013c68-6873-4f47-bd39-d7eae57cd89b" containerName="registry-server" Oct 11 10:45:32.277627 master-2 kubenswrapper[4776]: I1011 10:45:32.277558 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-cert-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.277850 master-2 kubenswrapper[4776]: I1011 10:45:32.277637 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-static-pod-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.277850 master-2 kubenswrapper[4776]: I1011 10:45:32.277663 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-data-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.277850 master-2 kubenswrapper[4776]: I1011 10:45:32.277714 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-log-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.277850 master-2 kubenswrapper[4776]: I1011 10:45:32.277744 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-usr-local-bin\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.277850 master-2 kubenswrapper[4776]: I1011 10:45:32.277833 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-resource-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379424 master-2 kubenswrapper[4776]: I1011 10:45:32.379331 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-cert-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379424 master-2 kubenswrapper[4776]: I1011 10:45:32.379394 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-static-pod-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379424 master-2 kubenswrapper[4776]: I1011 10:45:32.379413 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-data-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379424 master-2 kubenswrapper[4776]: I1011 10:45:32.379439 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-log-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379941 master-2 kubenswrapper[4776]: I1011 10:45:32.379443 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-cert-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379941 master-2 kubenswrapper[4776]: I1011 10:45:32.379523 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-data-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379941 master-2 kubenswrapper[4776]: I1011 10:45:32.379464 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-usr-local-bin\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379941 master-2 kubenswrapper[4776]: I1011 10:45:32.379534 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-log-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379941 master-2 kubenswrapper[4776]: I1011 10:45:32.379586 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-usr-local-bin\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379941 master-2 kubenswrapper[4776]: I1011 10:45:32.379577 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-static-pod-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379941 master-2 kubenswrapper[4776]: I1011 10:45:32.379641 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-resource-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.379941 master-2 kubenswrapper[4776]: I1011 10:45:32.379721 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/cd7826f9db5842f000a071fd58a1ae79-resource-dir\") pod \"etcd-master-2\" (UID: \"cd7826f9db5842f000a071fd58a1ae79\") " pod="openshift-etcd/etcd-master-2" Oct 11 10:45:32.458252 master-2 kubenswrapper[4776]: I1011 10:45:32.458047 4776 generic.go:334] "Generic (PLEG): container finished" podID="56e683e1-6c74-4998-ac94-05f58a65965f" containerID="903137cd4045917d2201001cea3f552800cf2d073b4309b1386b4ec5c2d61b48" exitCode=0 Oct 11 10:45:32.458252 master-2 kubenswrapper[4776]: I1011 10:45:32.458170 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-10-master-2" event={"ID":"56e683e1-6c74-4998-ac94-05f58a65965f","Type":"ContainerDied","Data":"903137cd4045917d2201001cea3f552800cf2d073b4309b1386b4ec5c2d61b48"} Oct 11 10:45:32.466964 master-2 kubenswrapper[4776]: I1011 10:45:32.466878 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcd-rev/0.log" Oct 11 10:45:32.469420 master-2 kubenswrapper[4776]: I1011 10:45:32.469253 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcd-metrics/0.log" Oct 11 10:45:32.472399 master-2 kubenswrapper[4776]: I1011 10:45:32.472288 4776 generic.go:334] "Generic (PLEG): container finished" podID="2c4a583adfee975da84510940117e71a" containerID="8aca7dd04fbd9bc97f886a62f0850ed592b9776f6bcf8d57f228ba1b4d57e0dd" exitCode=2 Oct 11 10:45:32.472399 master-2 kubenswrapper[4776]: I1011 10:45:32.472342 4776 generic.go:334] "Generic (PLEG): container finished" podID="2c4a583adfee975da84510940117e71a" containerID="431d1c1363285965b411f06e0338b448e40c4fef537351ea45fb00ac08129886" exitCode=0 Oct 11 10:45:32.472399 master-2 kubenswrapper[4776]: I1011 10:45:32.472375 4776 generic.go:334] "Generic (PLEG): container finished" podID="2c4a583adfee975da84510940117e71a" containerID="56dc1b99eea54bd4bc4092f0e7a9e5c850ceefafdfda928c057fe6d1b40b5d1d" exitCode=2 Oct 11 10:45:32.501229 master-2 kubenswrapper[4776]: I1011 10:45:32.501146 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-2" oldPodUID="2c4a583adfee975da84510940117e71a" podUID="cd7826f9db5842f000a071fd58a1ae79" Oct 11 10:45:33.031578 master-2 kubenswrapper[4776]: I1011 10:45:33.031447 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard 
namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:45:33.031578 master-2 kubenswrapper[4776]: I1011 10:45:33.031542 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: I1011 10:45:33.311014 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: 
[+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:45:33.311323 master-2 kubenswrapper[4776]: I1011 10:45:33.311100 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:45:33.826809 master-2 kubenswrapper[4776]: I1011 10:45:33.826726 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-10-master-2" Oct 11 10:45:33.902920 master-2 kubenswrapper[4776]: I1011 10:45:33.902851 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-kubelet-dir\") pod \"56e683e1-6c74-4998-ac94-05f58a65965f\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " Oct 11 10:45:33.903140 master-2 kubenswrapper[4776]: I1011 10:45:33.902935 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56e683e1-6c74-4998-ac94-05f58a65965f-kube-api-access\") pod \"56e683e1-6c74-4998-ac94-05f58a65965f\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " Oct 11 10:45:33.903140 master-2 kubenswrapper[4776]: I1011 10:45:33.902989 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-var-lock\") pod \"56e683e1-6c74-4998-ac94-05f58a65965f\" (UID: \"56e683e1-6c74-4998-ac94-05f58a65965f\") " Oct 11 10:45:33.903402 master-2 kubenswrapper[4776]: I1011 10:45:33.903364 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-var-lock" (OuterVolumeSpecName: "var-lock") pod "56e683e1-6c74-4998-ac94-05f58a65965f" (UID: "56e683e1-6c74-4998-ac94-05f58a65965f"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:45:33.903464 master-2 kubenswrapper[4776]: I1011 10:45:33.903394 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "56e683e1-6c74-4998-ac94-05f58a65965f" (UID: "56e683e1-6c74-4998-ac94-05f58a65965f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:45:33.906533 master-2 kubenswrapper[4776]: I1011 10:45:33.906476 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56e683e1-6c74-4998-ac94-05f58a65965f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "56e683e1-6c74-4998-ac94-05f58a65965f" (UID: "56e683e1-6c74-4998-ac94-05f58a65965f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:45:34.005228 master-2 kubenswrapper[4776]: I1011 10:45:34.005169 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56e683e1-6c74-4998-ac94-05f58a65965f-kube-api-access\") on node \"master-2\" DevicePath \"\"" Oct 11 10:45:34.005228 master-2 kubenswrapper[4776]: I1011 10:45:34.005214 4776 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-var-lock\") on node \"master-2\" DevicePath \"\"" Oct 11 10:45:34.005228 master-2 kubenswrapper[4776]: I1011 10:45:34.005227 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56e683e1-6c74-4998-ac94-05f58a65965f-kubelet-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:45:34.489032 master-2 kubenswrapper[4776]: I1011 10:45:34.488959 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-10-master-2" event={"ID":"56e683e1-6c74-4998-ac94-05f58a65965f","Type":"ContainerDied","Data":"a100df13f60f80ad4bc323ad2221d41c81f5ea6f0551eaf971a5f40bf6de7a88"} Oct 11 10:45:34.489032 master-2 kubenswrapper[4776]: I1011 10:45:34.489000 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a100df13f60f80ad4bc323ad2221d41c81f5ea6f0551eaf971a5f40bf6de7a88" Oct 11 10:45:34.489032 master-2 kubenswrapper[4776]: I1011 10:45:34.489020 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-10-master-2" Oct 11 10:45:35.515661 master-2 kubenswrapper[4776]: I1011 10:45:35.515571 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body= Oct 11 10:45:35.515661 master-2 kubenswrapper[4776]: I1011 10:45:35.515630 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" Oct 11 10:45:38.031564 master-2 kubenswrapper[4776]: I1011 10:45:38.031511 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:45:38.031564 master-2 kubenswrapper[4776]: I1011 10:45:38.031565 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: I1011 10:45:38.307923 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:45:38.308035 
master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:45:38.308035 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:45:38.309764 master-2 kubenswrapper[4776]: I1011 10:45:38.309728 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:45:40.515936 master-2 kubenswrapper[4776]: I1011 10:45:40.515874 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body= Oct 11 10:45:40.516507 master-2 kubenswrapper[4776]: I1011 10:45:40.515935 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get 
\"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" Oct 11 10:45:43.031553 master-2 kubenswrapper[4776]: I1011 10:45:43.031469 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:45:43.031553 master-2 kubenswrapper[4776]: I1011 10:45:43.031549 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:45:43.032375 master-2 kubenswrapper[4776]: I1011 10:45:43.031638 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-guard-master-2" Oct 11 10:45:43.033639 master-2 kubenswrapper[4776]: I1011 10:45:43.033569 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:45:43.033778 master-2 kubenswrapper[4776]: I1011 10:45:43.033724 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: I1011 10:45:43.306580 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 
11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:45:43.306793 master-2 kubenswrapper[4776]: I1011 10:45:43.306651 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:45:45.515529 master-2 kubenswrapper[4776]: I1011 10:45:45.515449 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body= Oct 11 10:45:45.515529 master-2 kubenswrapper[4776]: I1011 10:45:45.515518 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" 
podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" Oct 11 10:45:48.032171 master-2 kubenswrapper[4776]: I1011 10:45:48.032096 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:45:48.032171 master-2 kubenswrapper[4776]: I1011 10:45:48.032168 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: I1011 10:45:48.307049 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: 
[+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:45:48.307265 master-2 kubenswrapper[4776]: I1011 10:45:48.307137 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:45:50.515816 master-2 kubenswrapper[4776]: I1011 10:45:50.515753 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body= Oct 11 10:45:50.516283 master-2 kubenswrapper[4776]: I1011 10:45:50.515810 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" Oct 11 10:45:53.031541 master-2 kubenswrapper[4776]: I1011 10:45:53.031410 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:45:53.031541 master-2 kubenswrapper[4776]: I1011 10:45:53.031491 4776 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: I1011 10:45:53.308535 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]api-openshift-oauth-apiserver-available ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: 
[+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]autoregister-completion ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:45:53.308750 master-2 kubenswrapper[4776]: I1011 10:45:53.308643 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:45:55.519772 master-2 kubenswrapper[4776]: I1011 10:45:55.516181 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body= Oct 11 10:45:55.519772 master-2 kubenswrapper[4776]: I1011 10:45:55.516291 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" Oct 11 10:45:58.031930 master-2 kubenswrapper[4776]: I1011 10:45:58.031855 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:45:58.031930 master-2 kubenswrapper[4776]: I1011 10:45:58.031924 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: I1011 10:45:58.310439 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]log ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]api-openshift-apiserver-available ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: 
[+]api-openshift-oauth-apiserver-available ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]informer-sync ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-informers ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/crd-informer-synced ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/rbac/bootstrap-roles ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/bootstrap-controller ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-registration-controller ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: 
[+]autoregister-completion ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: [-]shutdown failed: reason withheld Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: readyz check failed Oct 11 10:45:58.310647 master-2 kubenswrapper[4776]: I1011 10:45:58.310521 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 10:46:00.516179 master-2 kubenswrapper[4776]: I1011 10:46:00.516100 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body= Oct 11 10:46:00.516725 master-2 kubenswrapper[4776]: I1011 10:46:00.516190 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" Oct 11 10:46:02.698240 master-2 kubenswrapper[4776]: I1011 10:46:02.698185 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcd-rev/0.log" Oct 11 10:46:02.699881 master-2 kubenswrapper[4776]: I1011 10:46:02.699835 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcd-metrics/0.log" Oct 11 10:46:02.700560 master-2 kubenswrapper[4776]: I1011 10:46:02.700519 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcd/0.log" Oct 11 10:46:02.701155 master-2 kubenswrapper[4776]: I1011 10:46:02.701125 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcdctl/0.log" Oct 11 10:46:02.702803 master-2 kubenswrapper[4776]: I1011 10:46:02.702629 4776 generic.go:334] "Generic (PLEG): container finished" podID="2c4a583adfee975da84510940117e71a" containerID="8ca4916746dcde3d1a7ba8c08259545f440c11f186b53d82aba07a17030c92d1" exitCode=137 Oct 11 10:46:02.702803 master-2 kubenswrapper[4776]: I1011 10:46:02.702663 4776 generic.go:334] "Generic (PLEG): container finished" podID="2c4a583adfee975da84510940117e71a" containerID="cc8943c5b4823b597a38ede8102a3e667afad877c11be87f804a4d9fcdbf5687" exitCode=137 Oct 11 10:46:02.734784 master-2 kubenswrapper[4776]: I1011 10:46:02.734657 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcd-rev/0.log" Oct 11 10:46:02.736281 master-2 kubenswrapper[4776]: I1011 10:46:02.736231 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcd-metrics/0.log" Oct 11 10:46:02.736994 master-2 kubenswrapper[4776]: I1011 10:46:02.736948 4776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcd/0.log" Oct 11 10:46:02.737593 master-2 kubenswrapper[4776]: I1011 10:46:02.737537 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcdctl/0.log" Oct 11 10:46:02.739232 master-2 kubenswrapper[4776]: I1011 10:46:02.739181 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-2" Oct 11 10:46:02.755615 master-2 kubenswrapper[4776]: I1011 10:46:02.755541 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-2" oldPodUID="2c4a583adfee975da84510940117e71a" podUID="cd7826f9db5842f000a071fd58a1ae79" Oct 11 10:46:02.885246 master-2 kubenswrapper[4776]: I1011 10:46:02.885136 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-resource-dir\") pod \"2c4a583adfee975da84510940117e71a\" (UID: \"2c4a583adfee975da84510940117e71a\") " Oct 11 10:46:02.885478 master-2 kubenswrapper[4776]: I1011 10:46:02.885235 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-usr-local-bin\") pod \"2c4a583adfee975da84510940117e71a\" (UID: \"2c4a583adfee975da84510940117e71a\") " Oct 11 10:46:02.885478 master-2 kubenswrapper[4776]: I1011 10:46:02.885324 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-static-pod-dir\") pod \"2c4a583adfee975da84510940117e71a\" (UID: \"2c4a583adfee975da84510940117e71a\") " Oct 11 10:46:02.885478 master-2 kubenswrapper[4776]: I1011 10:46:02.885365 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "2c4a583adfee975da84510940117e71a" (UID: "2c4a583adfee975da84510940117e71a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:46:02.885478 master-2 kubenswrapper[4776]: I1011 10:46:02.885404 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-cert-dir\") pod \"2c4a583adfee975da84510940117e71a\" (UID: \"2c4a583adfee975da84510940117e71a\") " Oct 11 10:46:02.885658 master-2 kubenswrapper[4776]: I1011 10:46:02.885493 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-log-dir\") pod \"2c4a583adfee975da84510940117e71a\" (UID: \"2c4a583adfee975da84510940117e71a\") " Oct 11 10:46:02.885658 master-2 kubenswrapper[4776]: I1011 10:46:02.885399 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "2c4a583adfee975da84510940117e71a" (UID: "2c4a583adfee975da84510940117e71a"). InnerVolumeSpecName "usr-local-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:46:02.885658 master-2 kubenswrapper[4776]: I1011 10:46:02.885436 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "2c4a583adfee975da84510940117e71a" (UID: "2c4a583adfee975da84510940117e71a"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:46:02.885658 master-2 kubenswrapper[4776]: I1011 10:46:02.885462 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "2c4a583adfee975da84510940117e71a" (UID: "2c4a583adfee975da84510940117e71a"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:46:02.885848 master-2 kubenswrapper[4776]: I1011 10:46:02.885664 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-log-dir" (OuterVolumeSpecName: "log-dir") pod "2c4a583adfee975da84510940117e71a" (UID: "2c4a583adfee975da84510940117e71a"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:46:02.885848 master-2 kubenswrapper[4776]: I1011 10:46:02.885737 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-data-dir\") pod \"2c4a583adfee975da84510940117e71a\" (UID: \"2c4a583adfee975da84510940117e71a\") " Oct 11 10:46:02.885848 master-2 kubenswrapper[4776]: I1011 10:46:02.885829 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-data-dir" (OuterVolumeSpecName: "data-dir") pod "2c4a583adfee975da84510940117e71a" (UID: "2c4a583adfee975da84510940117e71a"). InnerVolumeSpecName "data-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:46:02.886375 master-2 kubenswrapper[4776]: I1011 10:46:02.886331 4776 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-data-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:02.886427 master-2 kubenswrapper[4776]: I1011 10:46:02.886401 4776 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-resource-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:02.886427 master-2 kubenswrapper[4776]: I1011 10:46:02.886424 4776 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-usr-local-bin\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:02.886517 master-2 kubenswrapper[4776]: I1011 10:46:02.886444 4776 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-static-pod-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:02.886517 master-2 kubenswrapper[4776]: I1011 10:46:02.886496 4776 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-cert-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:02.886517 master-2 kubenswrapper[4776]: I1011 10:46:02.886513 4776 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2c4a583adfee975da84510940117e71a-log-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:03.031138 master-2 kubenswrapper[4776]: I1011 10:46:03.030960 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:46:03.031138 master-2 kubenswrapper[4776]: I1011 10:46:03.031040 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:46:03.303484 master-2 kubenswrapper[4776]: I1011 10:46:03.303287 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.12:6443/readyz\": dial tcp 192.168.34.12:6443: connect: connection refused" start-of-body= Oct 11 10:46:03.303484 master-2 kubenswrapper[4776]: I1011 10:46:03.303385 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:6443/readyz\": dial tcp 192.168.34.12:6443: connect: connection refused" Oct 11 10:46:03.710858 master-2 kubenswrapper[4776]: I1011 10:46:03.710741 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcd-rev/0.log" Oct 11 10:46:03.713457 master-2 kubenswrapper[4776]: I1011 10:46:03.713405 4776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcd-metrics/0.log" Oct 11 10:46:03.714262 master-2 kubenswrapper[4776]: I1011 10:46:03.714220 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcd/0.log" Oct 11 10:46:03.714845 master-2 kubenswrapper[4776]: I1011 10:46:03.714805 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_2c4a583adfee975da84510940117e71a/etcdctl/0.log" Oct 11 10:46:03.716184 master-2 kubenswrapper[4776]: I1011 10:46:03.716136 4776 scope.go:117] "RemoveContainer" containerID="8aca7dd04fbd9bc97f886a62f0850ed592b9776f6bcf8d57f228ba1b4d57e0dd" Oct 11 10:46:03.716422 master-2 kubenswrapper[4776]: I1011 10:46:03.716217 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-2" Oct 11 10:46:03.722770 master-2 kubenswrapper[4776]: I1011 10:46:03.722702 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-2" oldPodUID="2c4a583adfee975da84510940117e71a" podUID="cd7826f9db5842f000a071fd58a1ae79" Oct 11 10:46:03.739414 master-2 kubenswrapper[4776]: I1011 10:46:03.739355 4776 scope.go:117] "RemoveContainer" containerID="431d1c1363285965b411f06e0338b448e40c4fef537351ea45fb00ac08129886" Oct 11 10:46:03.748009 master-2 kubenswrapper[4776]: I1011 10:46:03.747943 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-2" oldPodUID="2c4a583adfee975da84510940117e71a" podUID="cd7826f9db5842f000a071fd58a1ae79" Oct 11 10:46:03.757090 master-2 kubenswrapper[4776]: I1011 10:46:03.757053 4776 scope.go:117] "RemoveContainer" containerID="56dc1b99eea54bd4bc4092f0e7a9e5c850ceefafdfda928c057fe6d1b40b5d1d" Oct 11 10:46:03.776663 master-2 kubenswrapper[4776]: I1011 10:46:03.776617 4776 scope.go:117] "RemoveContainer" containerID="8ca4916746dcde3d1a7ba8c08259545f440c11f186b53d82aba07a17030c92d1" Oct 11 10:46:03.800965 master-2 kubenswrapper[4776]: I1011 10:46:03.800889 4776 scope.go:117] "RemoveContainer" containerID="cc8943c5b4823b597a38ede8102a3e667afad877c11be87f804a4d9fcdbf5687" Oct 11 10:46:03.822285 master-2 kubenswrapper[4776]: I1011 10:46:03.822226 4776 scope.go:117] "RemoveContainer" containerID="7f66f4dfc685ae37f005fda864fb1584f27f6f6ea0f20644d46be5a7beee01cb" Oct 11 10:46:03.847794 master-2 kubenswrapper[4776]: I1011 10:46:03.847762 4776 scope.go:117] "RemoveContainer" containerID="8727285f17e12497f3cb86862360f5e6e70608ca5f775837d9eae36b1c220a0e" Oct 11 10:46:03.867510 master-2 kubenswrapper[4776]: I1011 10:46:03.867446 4776 scope.go:117] "RemoveContainer" containerID="8498ac9ed169687a9469df6d265ee2510783d932551f6caa45673a37deb3682e" Oct 11 10:46:04.072071 master-2 kubenswrapper[4776]: I1011 10:46:04.071849 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c4a583adfee975da84510940117e71a" path="/var/lib/kubelet/pods/2c4a583adfee975da84510940117e71a/volumes" Oct 11 10:46:04.713811 master-2 kubenswrapper[4776]: I1011 10:46:04.712882 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-2_9041570beb5002e8da158e70e12f0c16/kube-apiserver-cert-syncer/0.log" Oct 11 10:46:04.713811 master-2 kubenswrapper[4776]: I1011 10:46:04.713492 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:46:04.721372 master-2 kubenswrapper[4776]: I1011 10:46:04.721277 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-2" oldPodUID="9041570beb5002e8da158e70e12f0c16" podUID="978811670a28b21932e323b181b31435" Oct 11 10:46:04.723321 master-2 kubenswrapper[4776]: I1011 10:46:04.723282 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-cert-dir\") pod \"9041570beb5002e8da158e70e12f0c16\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " Oct 11 10:46:04.723412 master-2 kubenswrapper[4776]: I1011 10:46:04.723351 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-audit-dir\") pod \"9041570beb5002e8da158e70e12f0c16\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " Oct 11 10:46:04.723471 master-2 kubenswrapper[4776]: I1011 10:46:04.723449 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-resource-dir\") pod \"9041570beb5002e8da158e70e12f0c16\" (UID: \"9041570beb5002e8da158e70e12f0c16\") " Oct 11 10:46:04.723943 master-2 kubenswrapper[4776]: I1011 10:46:04.723883 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "9041570beb5002e8da158e70e12f0c16" (UID: "9041570beb5002e8da158e70e12f0c16"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:46:04.723994 master-2 kubenswrapper[4776]: I1011 10:46:04.723898 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9041570beb5002e8da158e70e12f0c16" (UID: "9041570beb5002e8da158e70e12f0c16"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:46:04.723994 master-2 kubenswrapper[4776]: I1011 10:46:04.723943 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "9041570beb5002e8da158e70e12f0c16" (UID: "9041570beb5002e8da158e70e12f0c16"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:46:04.730050 master-2 kubenswrapper[4776]: I1011 10:46:04.730004 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-2_9041570beb5002e8da158e70e12f0c16/kube-apiserver-cert-syncer/0.log" Oct 11 10:46:04.730639 master-2 kubenswrapper[4776]: I1011 10:46:04.730603 4776 generic.go:334] "Generic (PLEG): container finished" podID="9041570beb5002e8da158e70e12f0c16" containerID="ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da" exitCode=0 Oct 11 10:46:04.730708 master-2 kubenswrapper[4776]: I1011 10:46:04.730701 4776 scope.go:117] "RemoveContainer" containerID="b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57" Oct 11 10:46:04.730922 master-2 kubenswrapper[4776]: I1011 10:46:04.730883 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:46:04.736953 master-2 kubenswrapper[4776]: I1011 10:46:04.736910 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-2" oldPodUID="9041570beb5002e8da158e70e12f0c16" podUID="978811670a28b21932e323b181b31435" Oct 11 10:46:04.743055 master-2 kubenswrapper[4776]: I1011 10:46:04.743026 4776 scope.go:117] "RemoveContainer" containerID="784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6" Oct 11 10:46:04.748458 master-2 kubenswrapper[4776]: I1011 10:46:04.748436 4776 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-2" oldPodUID="9041570beb5002e8da158e70e12f0c16" podUID="978811670a28b21932e323b181b31435" Oct 11 10:46:04.754567 master-2 kubenswrapper[4776]: I1011 10:46:04.754551 4776 scope.go:117] "RemoveContainer" containerID="f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9" Oct 11 10:46:04.767390 master-2 kubenswrapper[4776]: I1011 10:46:04.767342 4776 scope.go:117] "RemoveContainer" containerID="e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510" Oct 11 10:46:04.778739 master-2 kubenswrapper[4776]: I1011 10:46:04.778723 4776 scope.go:117] "RemoveContainer" containerID="ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da" Oct 11 10:46:04.792649 master-2 kubenswrapper[4776]: I1011 10:46:04.792598 4776 scope.go:117] "RemoveContainer" containerID="f6ada1ea5b943d4fa8b24de52a5bcc62ede0268ae6328e324cc9b6e2a99596b1" Oct 11 10:46:04.825289 master-2 kubenswrapper[4776]: I1011 10:46:04.825246 4776 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-audit-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:04.825289 master-2 kubenswrapper[4776]: I1011 10:46:04.825272 4776 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-resource-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:04.825289 master-2 kubenswrapper[4776]: I1011 10:46:04.825281 4776 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9041570beb5002e8da158e70e12f0c16-cert-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:04.839041 master-2 kubenswrapper[4776]: I1011 10:46:04.838994 4776 scope.go:117] "RemoveContainer" containerID="b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57" Oct 11 10:46:04.839550 master-2 kubenswrapper[4776]: E1011 10:46:04.839486 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57\": container with ID starting with b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57 not found: ID does not exist" containerID="b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57" Oct 11 10:46:04.839550 master-2 kubenswrapper[4776]: I1011 10:46:04.839530 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57"} err="failed to get container status \"b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57\": rpc error: code = NotFound desc = could not find container 
\"b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57\": container with ID starting with b4c08429cc86879db5480ec2f3dde9881912ddad4ade9f5493ce6fec4a332c57 not found: ID does not exist" Oct 11 10:46:04.839704 master-2 kubenswrapper[4776]: I1011 10:46:04.839555 4776 scope.go:117] "RemoveContainer" containerID="784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6" Oct 11 10:46:04.839994 master-2 kubenswrapper[4776]: E1011 10:46:04.839948 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6\": container with ID starting with 784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6 not found: ID does not exist" containerID="784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6" Oct 11 10:46:04.840047 master-2 kubenswrapper[4776]: I1011 10:46:04.839992 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6"} err="failed to get container status \"784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6\": rpc error: code = NotFound desc = could not find container \"784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6\": container with ID starting with 784332c2c5f2a3346cc383f38ab14cdd92d396312446ecf270ce977e320356d6 not found: ID does not exist" Oct 11 10:46:04.840047 master-2 kubenswrapper[4776]: I1011 10:46:04.840028 4776 scope.go:117] "RemoveContainer" containerID="f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9" Oct 11 10:46:04.840450 master-2 kubenswrapper[4776]: E1011 10:46:04.840416 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9\": container with ID starting with f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9 not found: ID does not exist" containerID="f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9" Oct 11 10:46:04.840450 master-2 kubenswrapper[4776]: I1011 10:46:04.840442 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9"} err="failed to get container status \"f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9\": rpc error: code = NotFound desc = could not find container \"f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9\": container with ID starting with f6826bd2718c0b90e67c1bb2009c44eb47d7f7721d5c8f0edced854940bbddd9 not found: ID does not exist" Oct 11 10:46:04.840556 master-2 kubenswrapper[4776]: I1011 10:46:04.840457 4776 scope.go:117] "RemoveContainer" containerID="e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510" Oct 11 10:46:04.841300 master-2 kubenswrapper[4776]: E1011 10:46:04.841269 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510\": container with ID starting with e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510 not found: ID does not exist" containerID="e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510" Oct 11 10:46:04.841300 master-2 kubenswrapper[4776]: I1011 10:46:04.841289 4776 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510"} err="failed to get container status \"e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510\": rpc error: code = NotFound desc = could not find container \"e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510\": container with ID starting with e5a761e5cd190c111a316da491e9c9e5c814df28daabedd342b07e0542bb8510 not found: ID does not exist" Oct 11 10:46:04.841416 master-2 kubenswrapper[4776]: I1011 10:46:04.841303 4776 scope.go:117] "RemoveContainer" containerID="ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da" Oct 11 10:46:04.841699 master-2 kubenswrapper[4776]: E1011 10:46:04.841628 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da\": container with ID starting with ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da not found: ID does not exist" containerID="ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da" Oct 11 10:46:04.841806 master-2 kubenswrapper[4776]: I1011 10:46:04.841698 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da"} err="failed to get container status \"ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da\": rpc error: code = NotFound desc = could not find container \"ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da\": container with ID starting with ad118c4e204bcb441ba29580363b17245d098543755056860caac051ca5006da not found: ID does not exist" Oct 11 10:46:04.841806 master-2 kubenswrapper[4776]: I1011 10:46:04.841738 4776 scope.go:117] "RemoveContainer" containerID="f6ada1ea5b943d4fa8b24de52a5bcc62ede0268ae6328e324cc9b6e2a99596b1" Oct 11 10:46:04.842121 master-2 kubenswrapper[4776]: E1011 10:46:04.842092 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6ada1ea5b943d4fa8b24de52a5bcc62ede0268ae6328e324cc9b6e2a99596b1\": container with ID starting with f6ada1ea5b943d4fa8b24de52a5bcc62ede0268ae6328e324cc9b6e2a99596b1 not found: ID does not exist" containerID="f6ada1ea5b943d4fa8b24de52a5bcc62ede0268ae6328e324cc9b6e2a99596b1" Oct 11 10:46:04.842172 master-2 kubenswrapper[4776]: I1011 10:46:04.842118 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6ada1ea5b943d4fa8b24de52a5bcc62ede0268ae6328e324cc9b6e2a99596b1"} err="failed to get container status \"f6ada1ea5b943d4fa8b24de52a5bcc62ede0268ae6328e324cc9b6e2a99596b1\": rpc error: code = NotFound desc = could not find container \"f6ada1ea5b943d4fa8b24de52a5bcc62ede0268ae6328e324cc9b6e2a99596b1\": container with ID starting with f6ada1ea5b943d4fa8b24de52a5bcc62ede0268ae6328e324cc9b6e2a99596b1 not found: ID does not exist" Oct 11 10:46:05.516131 master-2 kubenswrapper[4776]: I1011 10:46:05.516047 4776 patch_prober.go:28] interesting pod/apiserver-69df5d46bc-klwcv container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" start-of-body= Oct 11 10:46:05.516460 master-2 kubenswrapper[4776]: I1011 10:46:05.516169 4776 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.83:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.83:8443: connect: connection refused" Oct 11 10:46:06.073961 master-2 kubenswrapper[4776]: I1011 10:46:06.073846 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9041570beb5002e8da158e70e12f0c16" path="/var/lib/kubelet/pods/9041570beb5002e8da158e70e12f0c16/volumes" Oct 11 10:46:08.031925 master-2 kubenswrapper[4776]: I1011 10:46:08.031822 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:46:08.031925 master-2 kubenswrapper[4776]: I1011 10:46:08.031914 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:46:08.058670 master-2 kubenswrapper[4776]: I1011 10:46:08.058586 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.12:6443/readyz\": dial tcp 192.168.34.12:6443: connect: connection refused" start-of-body= Oct 11 10:46:08.058888 master-2 kubenswrapper[4776]: I1011 10:46:08.058717 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:6443/readyz\": dial tcp 192.168.34.12:6443: connect: connection refused" Oct 11 10:46:08.303609 master-2 kubenswrapper[4776]: I1011 10:46:08.303407 4776 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-2 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.12:6443/readyz\": dial tcp 192.168.34.12:6443: connect: connection refused" start-of-body= Oct 11 10:46:08.303609 master-2 kubenswrapper[4776]: I1011 10:46:08.303494 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" podUID="1a003c5f-2a49-44fb-93a8-7a83319ce8e8" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:6443/readyz\": dial tcp 192.168.34.12:6443: connect: connection refused" Oct 11 10:46:09.058221 master-2 kubenswrapper[4776]: I1011 10:46:09.058144 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:46:09.072389 master-2 kubenswrapper[4776]: I1011 10:46:09.072344 4776 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" podUID="7ca61ebd-b6db-437f-b6d0-b91b94a84371" Oct 11 10:46:09.072389 master-2 kubenswrapper[4776]: I1011 10:46:09.072378 4776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" podUID="7ca61ebd-b6db-437f-b6d0-b91b94a84371" Oct 11 10:46:09.102986 master-2 kubenswrapper[4776]: I1011 10:46:09.102916 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-2"] Oct 11 10:46:09.103272 master-2 kubenswrapper[4776]: I1011 10:46:09.103170 4776 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:46:09.120229 master-2 kubenswrapper[4776]: I1011 10:46:09.120115 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-2"] Oct 11 10:46:09.136110 master-2 kubenswrapper[4776]: I1011 10:46:09.134924 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:46:09.136110 master-2 kubenswrapper[4776]: I1011 10:46:09.135288 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-2"] Oct 11 10:46:09.165551 master-2 kubenswrapper[4776]: W1011 10:46:09.165482 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod978811670a28b21932e323b181b31435.slice/crio-722814ff0aca0b8ddc53d987ba277cf7f734178822ae0416db6f657648412168 WatchSource:0}: Error finding container 722814ff0aca0b8ddc53d987ba277cf7f734178822ae0416db6f657648412168: Status 404 returned error can't find the container with id 722814ff0aca0b8ddc53d987ba277cf7f734178822ae0416db6f657648412168 Oct 11 10:46:09.771890 master-2 kubenswrapper[4776]: I1011 10:46:09.771825 4776 generic.go:334] "Generic (PLEG): container finished" podID="978811670a28b21932e323b181b31435" containerID="c33f9dbc69178f562a6fbf097b9145cc3bcae184a07ac83d7567b22faaebbd11" exitCode=0 Oct 11 10:46:09.771890 master-2 kubenswrapper[4776]: I1011 10:46:09.771866 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"978811670a28b21932e323b181b31435","Type":"ContainerDied","Data":"c33f9dbc69178f562a6fbf097b9145cc3bcae184a07ac83d7567b22faaebbd11"} Oct 11 10:46:09.771890 master-2 kubenswrapper[4776]: I1011 10:46:09.771896 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"978811670a28b21932e323b181b31435","Type":"ContainerStarted","Data":"722814ff0aca0b8ddc53d987ba277cf7f734178822ae0416db6f657648412168"} Oct 11 10:46:09.773873 master-2 kubenswrapper[4776]: I1011 10:46:09.773843 4776 generic.go:334] "Generic (PLEG): container finished" podID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerID="f9151fc06dbd01664b47f868a0c43e1a9e6b83f5d73cdb1bae7462ef40f38776" exitCode=0 Oct 11 10:46:09.773873 master-2 kubenswrapper[4776]: I1011 10:46:09.773868 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" 
event={"ID":"4125c617-d1f6-4f29-bae1-1165604b9cbd","Type":"ContainerDied","Data":"f9151fc06dbd01664b47f868a0c43e1a9e6b83f5d73cdb1bae7462ef40f38776"} Oct 11 10:46:10.060658 master-2 kubenswrapper[4776]: I1011 10:46:10.060072 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-2" Oct 11 10:46:10.080651 master-2 kubenswrapper[4776]: I1011 10:46:10.080617 4776 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-2" podUID="681786c2-8b94-4c7d-a99f-804e1f9f044f" Oct 11 10:46:10.080651 master-2 kubenswrapper[4776]: I1011 10:46:10.080649 4776 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-2" podUID="681786c2-8b94-4c7d-a99f-804e1f9f044f" Oct 11 10:46:10.088006 master-2 kubenswrapper[4776]: I1011 10:46:10.087908 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:46:10.104848 master-2 kubenswrapper[4776]: I1011 10:46:10.104790 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-2"] Oct 11 10:46:10.117249 master-2 kubenswrapper[4776]: I1011 10:46:10.117179 4776 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-2" Oct 11 10:46:10.123618 master-2 kubenswrapper[4776]: I1011 10:46:10.123537 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-2"] Oct 11 10:46:10.163038 master-2 kubenswrapper[4776]: I1011 10:46:10.162980 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-2" Oct 11 10:46:10.192522 master-2 kubenswrapper[4776]: I1011 10:46:10.192449 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-2"] Oct 11 10:46:10.207598 master-2 kubenswrapper[4776]: I1011 10:46:10.207543 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-image-import-ca\") pod \"4125c617-d1f6-4f29-bae1-1165604b9cbd\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " Oct 11 10:46:10.207598 master-2 kubenswrapper[4776]: I1011 10:46:10.207593 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-config\") pod \"4125c617-d1f6-4f29-bae1-1165604b9cbd\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " Oct 11 10:46:10.207762 master-2 kubenswrapper[4776]: I1011 10:46:10.207626 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vljtf\" (UniqueName: \"kubernetes.io/projected/4125c617-d1f6-4f29-bae1-1165604b9cbd-kube-api-access-vljtf\") pod \"4125c617-d1f6-4f29-bae1-1165604b9cbd\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " Oct 11 10:46:10.207762 master-2 kubenswrapper[4776]: I1011 10:46:10.207704 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit-dir\") pod \"4125c617-d1f6-4f29-bae1-1165604b9cbd\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " Oct 11 10:46:10.207762 master-2 kubenswrapper[4776]: I1011 10:46:10.207738 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-encryption-config\") pod 
\"4125c617-d1f6-4f29-bae1-1165604b9cbd\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " Oct 11 10:46:10.208164 master-2 kubenswrapper[4776]: I1011 10:46:10.208119 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-serving-cert\") pod \"4125c617-d1f6-4f29-bae1-1165604b9cbd\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " Oct 11 10:46:10.208232 master-2 kubenswrapper[4776]: I1011 10:46:10.208178 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-client\") pod \"4125c617-d1f6-4f29-bae1-1165604b9cbd\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " Oct 11 10:46:10.208232 master-2 kubenswrapper[4776]: I1011 10:46:10.208208 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-trusted-ca-bundle\") pod \"4125c617-d1f6-4f29-bae1-1165604b9cbd\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " Oct 11 10:46:10.208318 master-2 kubenswrapper[4776]: I1011 10:46:10.208238 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-serving-ca\") pod \"4125c617-d1f6-4f29-bae1-1165604b9cbd\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " Oct 11 10:46:10.208318 master-2 kubenswrapper[4776]: I1011 10:46:10.208262 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-node-pullsecrets\") pod \"4125c617-d1f6-4f29-bae1-1165604b9cbd\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " Oct 11 10:46:10.208318 master-2 kubenswrapper[4776]: I1011 10:46:10.208285 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit\") pod \"4125c617-d1f6-4f29-bae1-1165604b9cbd\" (UID: \"4125c617-d1f6-4f29-bae1-1165604b9cbd\") " Oct 11 10:46:10.208769 master-2 kubenswrapper[4776]: I1011 10:46:10.208607 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "4125c617-d1f6-4f29-bae1-1165604b9cbd" (UID: "4125c617-d1f6-4f29-bae1-1165604b9cbd"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:46:10.208893 master-2 kubenswrapper[4776]: I1011 10:46:10.208845 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit" (OuterVolumeSpecName: "audit") pod "4125c617-d1f6-4f29-bae1-1165604b9cbd" (UID: "4125c617-d1f6-4f29-bae1-1165604b9cbd"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:46:10.208893 master-2 kubenswrapper[4776]: I1011 10:46:10.208875 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "4125c617-d1f6-4f29-bae1-1165604b9cbd" (UID: "4125c617-d1f6-4f29-bae1-1165604b9cbd"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:46:10.209207 master-2 kubenswrapper[4776]: I1011 10:46:10.209176 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "4125c617-d1f6-4f29-bae1-1165604b9cbd" (UID: "4125c617-d1f6-4f29-bae1-1165604b9cbd"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:46:10.209311 master-2 kubenswrapper[4776]: I1011 10:46:10.209217 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "4125c617-d1f6-4f29-bae1-1165604b9cbd" (UID: "4125c617-d1f6-4f29-bae1-1165604b9cbd"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:46:10.209311 master-2 kubenswrapper[4776]: I1011 10:46:10.209252 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4125c617-d1f6-4f29-bae1-1165604b9cbd" (UID: "4125c617-d1f6-4f29-bae1-1165604b9cbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:46:10.211239 master-2 kubenswrapper[4776]: I1011 10:46:10.211184 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "4125c617-d1f6-4f29-bae1-1165604b9cbd" (UID: "4125c617-d1f6-4f29-bae1-1165604b9cbd"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:46:10.211357 master-2 kubenswrapper[4776]: I1011 10:46:10.211316 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4125c617-d1f6-4f29-bae1-1165604b9cbd-kube-api-access-vljtf" (OuterVolumeSpecName: "kube-api-access-vljtf") pod "4125c617-d1f6-4f29-bae1-1165604b9cbd" (UID: "4125c617-d1f6-4f29-bae1-1165604b9cbd"). InnerVolumeSpecName "kube-api-access-vljtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:46:10.211501 master-2 kubenswrapper[4776]: I1011 10:46:10.211445 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "4125c617-d1f6-4f29-bae1-1165604b9cbd" (UID: "4125c617-d1f6-4f29-bae1-1165604b9cbd"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:46:10.212631 master-2 kubenswrapper[4776]: I1011 10:46:10.212587 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4125c617-d1f6-4f29-bae1-1165604b9cbd" (UID: "4125c617-d1f6-4f29-bae1-1165604b9cbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:46:10.212848 master-2 kubenswrapper[4776]: I1011 10:46:10.212774 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-config" (OuterVolumeSpecName: "config") pod "4125c617-d1f6-4f29-bae1-1165604b9cbd" (UID: "4125c617-d1f6-4f29-bae1-1165604b9cbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:46:10.309245 master-2 kubenswrapper[4776]: I1011 10:46:10.309176 4776 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-encryption-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:10.309245 master-2 kubenswrapper[4776]: I1011 10:46:10.309236 4776 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:10.309245 master-2 kubenswrapper[4776]: I1011 10:46:10.309250 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-client\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:10.309245 master-2 kubenswrapper[4776]: I1011 10:46:10.309258 4776 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:10.309541 master-2 kubenswrapper[4776]: I1011 10:46:10.309268 4776 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-etcd-serving-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:10.309541 master-2 kubenswrapper[4776]: I1011 10:46:10.309277 4776 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-node-pullsecrets\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:10.309541 master-2 kubenswrapper[4776]: I1011 10:46:10.309285 4776 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:10.309541 master-2 kubenswrapper[4776]: I1011 10:46:10.309293 4776 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-image-import-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:10.309541 master-2 kubenswrapper[4776]: I1011 10:46:10.309301 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4125c617-d1f6-4f29-bae1-1165604b9cbd-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:10.309541 master-2 kubenswrapper[4776]: I1011 10:46:10.309311 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vljtf\" (UniqueName: \"kubernetes.io/projected/4125c617-d1f6-4f29-bae1-1165604b9cbd-kube-api-access-vljtf\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:10.309541 master-2 kubenswrapper[4776]: I1011 10:46:10.309318 4776 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4125c617-d1f6-4f29-bae1-1165604b9cbd-audit-dir\") on 
node \"master-2\" DevicePath \"\"" Oct 11 10:46:10.784806 master-2 kubenswrapper[4776]: I1011 10:46:10.784378 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" event={"ID":"4125c617-d1f6-4f29-bae1-1165604b9cbd","Type":"ContainerDied","Data":"8da59f9f35574c5f3bacdb804091911baf908469aad4410d43906f030b48b831"} Oct 11 10:46:10.784806 master-2 kubenswrapper[4776]: I1011 10:46:10.784450 4776 scope.go:117] "RemoveContainer" containerID="e4993a00e7728dc437a4c6094596c369ce11c26b8ae277d77d9133c67e1933b9" Oct 11 10:46:10.784806 master-2 kubenswrapper[4776]: I1011 10:46:10.784644 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-69df5d46bc-klwcv" Oct 11 10:46:10.791685 master-2 kubenswrapper[4776]: I1011 10:46:10.791602 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"978811670a28b21932e323b181b31435","Type":"ContainerStarted","Data":"e6502be51ddefdca06d3a091a1356ff59f90e648646c60ca18073c7cc29dd884"} Oct 11 10:46:10.791867 master-2 kubenswrapper[4776]: I1011 10:46:10.791694 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"978811670a28b21932e323b181b31435","Type":"ContainerStarted","Data":"fab18106c8341976767bd5af4dbc1f1bac3d07bab245177bb31d7f4058237efa"} Oct 11 10:46:10.791867 master-2 kubenswrapper[4776]: I1011 10:46:10.791712 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"978811670a28b21932e323b181b31435","Type":"ContainerStarted","Data":"53abb969cf9ec7a6a0b3309d898dd34b335fe0c42ff8b613f60abc04e216e34c"} Oct 11 10:46:10.793324 master-2 kubenswrapper[4776]: I1011 10:46:10.793293 4776 generic.go:334] "Generic (PLEG): container finished" podID="cd7826f9db5842f000a071fd58a1ae79" containerID="473347917efdca54c9ba1fc2ce7b95dad4dd94ca6c0f5821dca541936ee87b10" exitCode=0 Oct 11 10:46:10.793324 master-2 kubenswrapper[4776]: I1011 10:46:10.793324 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"cd7826f9db5842f000a071fd58a1ae79","Type":"ContainerDied","Data":"473347917efdca54c9ba1fc2ce7b95dad4dd94ca6c0f5821dca541936ee87b10"} Oct 11 10:46:10.793324 master-2 kubenswrapper[4776]: I1011 10:46:10.793341 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"cd7826f9db5842f000a071fd58a1ae79","Type":"ContainerStarted","Data":"2f5dc325f87e3ed7b3a94bbd3cc2905c4a69e038d0597785cbb2ce2fdb2e9f37"} Oct 11 10:46:10.809232 master-2 kubenswrapper[4776]: I1011 10:46:10.809198 4776 scope.go:117] "RemoveContainer" containerID="f9151fc06dbd01664b47f868a0c43e1a9e6b83f5d73cdb1bae7462ef40f38776" Oct 11 10:46:10.837911 master-2 kubenswrapper[4776]: I1011 10:46:10.834071 4776 scope.go:117] "RemoveContainer" containerID="eb57f483b1fb4288bd615f1e2349b2230b6272e2d1ba16c1f8dcb73ce4999885" Oct 11 10:46:10.943684 master-2 kubenswrapper[4776]: I1011 10:46:10.943615 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-69df5d46bc-klwcv"] Oct 11 10:46:10.967947 master-2 kubenswrapper[4776]: I1011 10:46:10.967869 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-69df5d46bc-klwcv"] Oct 11 10:46:11.816885 master-2 kubenswrapper[4776]: I1011 10:46:11.816811 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"978811670a28b21932e323b181b31435","Type":"ContainerStarted","Data":"803d9f33ede284510ac06ea69345c389f6ba883c3072fbc05d47b05da5f8d05f"} Oct 11 10:46:11.817435 master-2 kubenswrapper[4776]: I1011 10:46:11.816891 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-2" event={"ID":"978811670a28b21932e323b181b31435","Type":"ContainerStarted","Data":"81bd40984e0ececaa997a36721a48f361a68869ec7f5f8ab9db73abd3b783282"} Oct 11 10:46:11.817435 master-2 kubenswrapper[4776]: I1011 10:46:11.817132 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:46:11.821767 master-2 kubenswrapper[4776]: I1011 10:46:11.821658 4776 generic.go:334] "Generic (PLEG): container finished" podID="cd7826f9db5842f000a071fd58a1ae79" containerID="e75f231137713f045f1201a61014d9ccf9db84df88d66c8356e35c660a504624" exitCode=0 Oct 11 10:46:11.821843 master-2 kubenswrapper[4776]: I1011 10:46:11.821714 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"cd7826f9db5842f000a071fd58a1ae79","Type":"ContainerDied","Data":"e75f231137713f045f1201a61014d9ccf9db84df88d66c8356e35c660a504624"} Oct 11 10:46:11.870697 master-2 kubenswrapper[4776]: I1011 10:46:11.869169 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-2" podStartSLOduration=2.869138731 podStartE2EDuration="2.869138731s" podCreationTimestamp="2025-10-11 10:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:46:11.855208774 +0000 UTC m=+1206.639635493" watchObservedRunningTime="2025-10-11 10:46:11.869138731 +0000 UTC m=+1206.653565550" Oct 11 10:46:12.065564 master-2 kubenswrapper[4776]: I1011 10:46:12.065478 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" path="/var/lib/kubelet/pods/4125c617-d1f6-4f29-bae1-1165604b9cbd/volumes" Oct 11 10:46:12.831686 master-2 kubenswrapper[4776]: I1011 10:46:12.831598 4776 generic.go:334] "Generic (PLEG): container finished" podID="cd7826f9db5842f000a071fd58a1ae79" containerID="07cf5720cb90dab3edd879f83c1da3f7b2c6567ac99e60fef063fc76ab68476f" exitCode=0 Oct 11 10:46:12.831686 master-2 kubenswrapper[4776]: I1011 10:46:12.831649 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"cd7826f9db5842f000a071fd58a1ae79","Type":"ContainerDied","Data":"07cf5720cb90dab3edd879f83c1da3f7b2c6567ac99e60fef063fc76ab68476f"} Oct 11 10:46:13.031041 master-2 kubenswrapper[4776]: I1011 10:46:13.030991 4776 patch_prober.go:28] interesting pod/etcd-guard-master-2 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" start-of-body= Oct 11 10:46:13.031118 master-2 kubenswrapper[4776]: I1011 10:46:13.031043 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-2" podUID="9314095b-1661-46bd-8e19-2741d9d758fa" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.12:9980/readyz\": dial tcp 192.168.34.12:9980: connect: connection refused" Oct 11 10:46:13.306695 master-2 kubenswrapper[4776]: I1011 10:46:13.306633 4776 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-guard-master-2" Oct 11 10:46:13.852639 master-2 kubenswrapper[4776]: I1011 10:46:13.852558 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"cd7826f9db5842f000a071fd58a1ae79","Type":"ContainerStarted","Data":"0f53e46f2ca9a8a7f2aece0c78efd9c6ac75b85448fc5deac2bf1f78f0dfd137"} Oct 11 10:46:13.852639 master-2 kubenswrapper[4776]: I1011 10:46:13.852624 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"cd7826f9db5842f000a071fd58a1ae79","Type":"ContainerStarted","Data":"d47574c8ea8ad03e448653e7ae94459c94135291b8d46f602eff6c7e32ba5c40"} Oct 11 10:46:13.852639 master-2 kubenswrapper[4776]: I1011 10:46:13.852637 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"cd7826f9db5842f000a071fd58a1ae79","Type":"ContainerStarted","Data":"d1a0e578a5f5b18f8830b2435cf57c7cfd2e679c4028a45956e368e4891bfa04"} Oct 11 10:46:13.852639 master-2 kubenswrapper[4776]: I1011 10:46:13.852649 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"cd7826f9db5842f000a071fd58a1ae79","Type":"ContainerStarted","Data":"8834904950d9fc9a68f93ae37e78f800cc8f9a8eb962a08f0b62f9e4809cf65a"} Oct 11 10:46:14.135918 master-2 kubenswrapper[4776]: I1011 10:46:14.135790 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:46:14.135918 master-2 kubenswrapper[4776]: I1011 10:46:14.135849 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:46:14.141414 master-2 kubenswrapper[4776]: I1011 10:46:14.141380 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:46:14.865429 master-2 kubenswrapper[4776]: I1011 10:46:14.865358 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-2" event={"ID":"cd7826f9db5842f000a071fd58a1ae79","Type":"ContainerStarted","Data":"270ff2c4f6bcd14e58618f09b77ff83eebe14d1109545f40f25b1270461f3ef3"} Oct 11 10:46:14.869654 master-2 kubenswrapper[4776]: I1011 10:46:14.869627 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:46:14.930698 master-2 kubenswrapper[4776]: I1011 10:46:14.930588 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-2" podStartSLOduration=4.930564563 podStartE2EDuration="4.930564563s" podCreationTimestamp="2025-10-11 10:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:46:14.926269136 +0000 UTC m=+1209.710695845" watchObservedRunningTime="2025-10-11 10:46:14.930564563 +0000 UTC m=+1209.714991302" Oct 11 10:46:15.171664 master-2 kubenswrapper[4776]: I1011 10:46:15.171611 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-2" Oct 11 10:46:18.071054 master-2 kubenswrapper[4776]: I1011 10:46:18.070978 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-guard-master-2" Oct 11 10:46:19.812939 master-2 kubenswrapper[4776]: I1011 10:46:19.812852 4776 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-apiserver/apiserver-8865994fd-5kbfp"] Oct 11 10:46:19.813922 master-2 kubenswrapper[4776]: E1011 10:46:19.813234 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver-check-endpoints" Oct 11 10:46:19.813922 master-2 kubenswrapper[4776]: I1011 10:46:19.813255 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver-check-endpoints" Oct 11 10:46:19.813922 master-2 kubenswrapper[4776]: E1011 10:46:19.813275 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e683e1-6c74-4998-ac94-05f58a65965f" containerName="installer" Oct 11 10:46:19.813922 master-2 kubenswrapper[4776]: I1011 10:46:19.813287 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e683e1-6c74-4998-ac94-05f58a65965f" containerName="installer" Oct 11 10:46:19.813922 master-2 kubenswrapper[4776]: E1011 10:46:19.813313 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" Oct 11 10:46:19.813922 master-2 kubenswrapper[4776]: I1011 10:46:19.813327 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" Oct 11 10:46:19.813922 master-2 kubenswrapper[4776]: E1011 10:46:19.813342 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="fix-audit-permissions" Oct 11 10:46:19.813922 master-2 kubenswrapper[4776]: I1011 10:46:19.813354 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="fix-audit-permissions" Oct 11 10:46:19.813922 master-2 kubenswrapper[4776]: I1011 10:46:19.813550 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver" Oct 11 10:46:19.813922 master-2 kubenswrapper[4776]: I1011 10:46:19.813572 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="4125c617-d1f6-4f29-bae1-1165604b9cbd" containerName="openshift-apiserver-check-endpoints" Oct 11 10:46:19.813922 master-2 kubenswrapper[4776]: I1011 10:46:19.813592 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="56e683e1-6c74-4998-ac94-05f58a65965f" containerName="installer" Oct 11 10:46:19.814794 master-2 kubenswrapper[4776]: I1011 10:46:19.814757 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.817867 master-2 kubenswrapper[4776]: I1011 10:46:19.817807 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 11 10:46:19.818399 master-2 kubenswrapper[4776]: I1011 10:46:19.818332 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 11 10:46:19.818486 master-2 kubenswrapper[4776]: I1011 10:46:19.818454 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 11 10:46:19.818949 master-2 kubenswrapper[4776]: I1011 10:46:19.818858 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 11 10:46:19.819592 master-2 kubenswrapper[4776]: I1011 10:46:19.819548 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 11 10:46:19.819950 master-2 kubenswrapper[4776]: I1011 10:46:19.819913 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 11 10:46:19.820661 master-2 kubenswrapper[4776]: I1011 10:46:19.820630 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 11 10:46:19.820905 master-2 kubenswrapper[4776]: I1011 10:46:19.820877 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 11 10:46:19.821083 master-2 kubenswrapper[4776]: I1011 10:46:19.821051 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-lntq9" Oct 11 10:46:19.821184 master-2 kubenswrapper[4776]: I1011 10:46:19.821158 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 11 10:46:19.836187 master-2 kubenswrapper[4776]: I1011 10:46:19.836128 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 11 10:46:19.848534 master-2 kubenswrapper[4776]: I1011 10:46:19.848458 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/350b6f3e-a23f-426b-9923-b2a09914e0cb-audit-dir\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.848731 master-2 kubenswrapper[4776]: I1011 10:46:19.848539 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-trusted-ca-bundle\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.848731 master-2 kubenswrapper[4776]: I1011 10:46:19.848690 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/350b6f3e-a23f-426b-9923-b2a09914e0cb-serving-cert\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.848893 master-2 kubenswrapper[4776]: I1011 10:46:19.848840 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-image-import-ca\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.848965 master-2 kubenswrapper[4776]: I1011 10:46:19.848943 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/350b6f3e-a23f-426b-9923-b2a09914e0cb-node-pullsecrets\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.849024 master-2 kubenswrapper[4776]: I1011 10:46:19.849008 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-etcd-serving-ca\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.849060 master-2 kubenswrapper[4776]: I1011 10:46:19.849043 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/350b6f3e-a23f-426b-9923-b2a09914e0cb-etcd-client\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.849121 master-2 kubenswrapper[4776]: I1011 10:46:19.849067 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/350b6f3e-a23f-426b-9923-b2a09914e0cb-encryption-config\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.849208 master-2 kubenswrapper[4776]: I1011 10:46:19.849185 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lc8w\" (UniqueName: \"kubernetes.io/projected/350b6f3e-a23f-426b-9923-b2a09914e0cb-kube-api-access-4lc8w\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.849264 master-2 kubenswrapper[4776]: I1011 10:46:19.849223 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-config\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.849558 master-2 kubenswrapper[4776]: I1011 10:46:19.849503 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-audit\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.850035 master-2 kubenswrapper[4776]: I1011 10:46:19.849967 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-8865994fd-5kbfp"] Oct 11 10:46:19.950623 master-2 kubenswrapper[4776]: I1011 10:46:19.950559 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/350b6f3e-a23f-426b-9923-b2a09914e0cb-audit-dir\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.950623 master-2 kubenswrapper[4776]: I1011 10:46:19.950614 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-trusted-ca-bundle\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.950623 master-2 kubenswrapper[4776]: I1011 10:46:19.950639 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/350b6f3e-a23f-426b-9923-b2a09914e0cb-serving-cert\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.951082 master-2 kubenswrapper[4776]: I1011 10:46:19.950685 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-image-import-ca\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.951082 master-2 kubenswrapper[4776]: I1011 10:46:19.950697 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/350b6f3e-a23f-426b-9923-b2a09914e0cb-audit-dir\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.951082 master-2 kubenswrapper[4776]: I1011 10:46:19.950713 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/350b6f3e-a23f-426b-9923-b2a09914e0cb-node-pullsecrets\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.951082 master-2 kubenswrapper[4776]: I1011 10:46:19.950775 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/350b6f3e-a23f-426b-9923-b2a09914e0cb-node-pullsecrets\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.951082 master-2 kubenswrapper[4776]: I1011 10:46:19.950786 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-etcd-serving-ca\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.951082 master-2 kubenswrapper[4776]: I1011 10:46:19.950811 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/350b6f3e-a23f-426b-9923-b2a09914e0cb-etcd-client\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.951082 master-2 kubenswrapper[4776]: I1011 10:46:19.950832 4776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/350b6f3e-a23f-426b-9923-b2a09914e0cb-encryption-config\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.951082 master-2 kubenswrapper[4776]: I1011 10:46:19.950872 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lc8w\" (UniqueName: \"kubernetes.io/projected/350b6f3e-a23f-426b-9923-b2a09914e0cb-kube-api-access-4lc8w\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.951082 master-2 kubenswrapper[4776]: I1011 10:46:19.950895 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-config\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.951082 master-2 kubenswrapper[4776]: I1011 10:46:19.950935 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-audit\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.951883 master-2 kubenswrapper[4776]: I1011 10:46:19.951842 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-image-import-ca\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.952281 master-2 kubenswrapper[4776]: I1011 10:46:19.952248 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-etcd-serving-ca\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.952521 master-2 kubenswrapper[4776]: I1011 10:46:19.952476 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-audit\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.952668 master-2 kubenswrapper[4776]: I1011 10:46:19.952611 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-config\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.952897 master-2 kubenswrapper[4776]: I1011 10:46:19.952843 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/350b6f3e-a23f-426b-9923-b2a09914e0cb-trusted-ca-bundle\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.954636 master-2 kubenswrapper[4776]: I1011 10:46:19.954598 4776 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/350b6f3e-a23f-426b-9923-b2a09914e0cb-encryption-config\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.955394 master-2 kubenswrapper[4776]: I1011 10:46:19.955346 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/350b6f3e-a23f-426b-9923-b2a09914e0cb-serving-cert\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.959902 master-2 kubenswrapper[4776]: I1011 10:46:19.959852 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/350b6f3e-a23f-426b-9923-b2a09914e0cb-etcd-client\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:19.982873 master-2 kubenswrapper[4776]: I1011 10:46:19.981966 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lc8w\" (UniqueName: \"kubernetes.io/projected/350b6f3e-a23f-426b-9923-b2a09914e0cb-kube-api-access-4lc8w\") pod \"apiserver-8865994fd-5kbfp\" (UID: \"350b6f3e-a23f-426b-9923-b2a09914e0cb\") " pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:20.131771 master-2 kubenswrapper[4776]: I1011 10:46:20.131555 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:20.171923 master-2 kubenswrapper[4776]: I1011 10:46:20.171842 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-2" Oct 11 10:46:20.574491 master-2 kubenswrapper[4776]: I1011 10:46:20.574434 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-8865994fd-5kbfp"] Oct 11 10:46:20.903662 master-2 kubenswrapper[4776]: I1011 10:46:20.903544 4776 generic.go:334] "Generic (PLEG): container finished" podID="350b6f3e-a23f-426b-9923-b2a09914e0cb" containerID="b89b5c0840c738481e8e36f4beb28c4d82117a8861c89ed1a8e99de967b8d99a" exitCode=0 Oct 11 10:46:20.904315 master-2 kubenswrapper[4776]: I1011 10:46:20.904265 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-8865994fd-5kbfp" event={"ID":"350b6f3e-a23f-426b-9923-b2a09914e0cb","Type":"ContainerDied","Data":"b89b5c0840c738481e8e36f4beb28c4d82117a8861c89ed1a8e99de967b8d99a"} Oct 11 10:46:20.904428 master-2 kubenswrapper[4776]: I1011 10:46:20.904410 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-8865994fd-5kbfp" event={"ID":"350b6f3e-a23f-426b-9923-b2a09914e0cb","Type":"ContainerStarted","Data":"eaf5f41a40813524659e8199b72ad73238bd49163bb61383dd7e4e2fcc924558"} Oct 11 10:46:21.913942 master-2 kubenswrapper[4776]: I1011 10:46:21.913869 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-8865994fd-5kbfp" event={"ID":"350b6f3e-a23f-426b-9923-b2a09914e0cb","Type":"ContainerStarted","Data":"2c4f4cf15af5c8df0b2d019e445fdf8dff21c6eb5222aff022c05b698c569fd0"} Oct 11 10:46:21.913942 master-2 kubenswrapper[4776]: I1011 10:46:21.913915 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-8865994fd-5kbfp" 
event={"ID":"350b6f3e-a23f-426b-9923-b2a09914e0cb","Type":"ContainerStarted","Data":"87c8825568cdfb6d2376cb5c391bb4c4ab0b3ce29e33cc8ad53c772cb1884816"} Oct 11 10:46:21.958457 master-2 kubenswrapper[4776]: I1011 10:46:21.958349 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-8865994fd-5kbfp" podStartSLOduration=122.958326831 podStartE2EDuration="2m2.958326831s" podCreationTimestamp="2025-10-11 10:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:46:21.957025336 +0000 UTC m=+1216.741452085" watchObservedRunningTime="2025-10-11 10:46:21.958326831 +0000 UTC m=+1216.742753560" Oct 11 10:46:25.133132 master-2 kubenswrapper[4776]: I1011 10:46:25.133061 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:25.133132 master-2 kubenswrapper[4776]: I1011 10:46:25.133134 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:25.142925 master-2 kubenswrapper[4776]: I1011 10:46:25.142884 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:25.959895 master-2 kubenswrapper[4776]: I1011 10:46:25.959813 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-8865994fd-5kbfp" Oct 11 10:46:29.140463 master-2 kubenswrapper[4776]: I1011 10:46:29.140265 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-2" Oct 11 10:46:30.188574 master-2 kubenswrapper[4776]: I1011 10:46:30.188515 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-2" Oct 11 10:46:30.205003 master-2 kubenswrapper[4776]: I1011 10:46:30.204946 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-2" Oct 11 10:46:39.273862 master-2 kubenswrapper[4776]: I1011 10:46:39.273635 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/revision-pruner-10-master-2"] Oct 11 10:46:39.275153 master-2 kubenswrapper[4776]: I1011 10:46:39.275113 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/revision-pruner-10-master-2" Oct 11 10:46:39.279336 master-2 kubenswrapper[4776]: I1011 10:46:39.279273 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-xbqxb" Oct 11 10:46:39.296533 master-2 kubenswrapper[4776]: I1011 10:46:39.296462 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/revision-pruner-10-master-2"] Oct 11 10:46:39.313432 master-2 kubenswrapper[4776]: I1011 10:46:39.313380 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kubelet-dir\") pod \"revision-pruner-10-master-2\" (UID: \"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8\") " pod="openshift-etcd/revision-pruner-10-master-2" Oct 11 10:46:39.314427 master-2 kubenswrapper[4776]: I1011 10:46:39.314265 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kube-api-access\") pod \"revision-pruner-10-master-2\" (UID: \"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8\") " pod="openshift-etcd/revision-pruner-10-master-2" Oct 11 10:46:39.415822 master-2 kubenswrapper[4776]: I1011 10:46:39.415747 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kubelet-dir\") pod \"revision-pruner-10-master-2\" (UID: \"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8\") " pod="openshift-etcd/revision-pruner-10-master-2" Oct 11 10:46:39.416047 master-2 kubenswrapper[4776]: I1011 10:46:39.415859 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kube-api-access\") pod \"revision-pruner-10-master-2\" (UID: \"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8\") " pod="openshift-etcd/revision-pruner-10-master-2" Oct 11 10:46:39.416047 master-2 kubenswrapper[4776]: I1011 10:46:39.415874 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kubelet-dir\") pod \"revision-pruner-10-master-2\" (UID: \"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8\") " pod="openshift-etcd/revision-pruner-10-master-2" Oct 11 10:46:39.449182 master-2 kubenswrapper[4776]: I1011 10:46:39.449087 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kube-api-access\") pod \"revision-pruner-10-master-2\" (UID: \"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8\") " pod="openshift-etcd/revision-pruner-10-master-2" Oct 11 10:46:39.608308 master-2 kubenswrapper[4776]: I1011 10:46:39.608171 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/revision-pruner-10-master-2" Oct 11 10:46:40.002629 master-2 kubenswrapper[4776]: I1011 10:46:40.002545 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/revision-pruner-10-master-2"] Oct 11 10:46:40.011057 master-2 kubenswrapper[4776]: W1011 10:46:40.011004 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc2320cb4_bf2c_4d63_b9c6_5a7461a547e8.slice/crio-4fd1252f9240022e9bbde7d90d6240b8481dae2c9c680177ca2d6dbdaa850aa7 WatchSource:0}: Error finding container 4fd1252f9240022e9bbde7d90d6240b8481dae2c9c680177ca2d6dbdaa850aa7: Status 404 returned error can't find the container with id 4fd1252f9240022e9bbde7d90d6240b8481dae2c9c680177ca2d6dbdaa850aa7 Oct 11 10:46:40.038508 master-2 kubenswrapper[4776]: I1011 10:46:40.038432 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/revision-pruner-10-master-2" event={"ID":"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8","Type":"ContainerStarted","Data":"4fd1252f9240022e9bbde7d90d6240b8481dae2c9c680177ca2d6dbdaa850aa7"} Oct 11 10:46:40.707798 master-2 kubenswrapper[4776]: I1011 10:46:40.707731 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-3-master-2"] Oct 11 10:46:40.724082 master-2 kubenswrapper[4776]: I1011 10:46:40.723999 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/installer-3-master-2"] Oct 11 10:46:41.045099 master-2 kubenswrapper[4776]: I1011 10:46:41.045027 4776 generic.go:334] "Generic (PLEG): container finished" podID="c2320cb4-bf2c-4d63-b9c6-5a7461a547e8" containerID="c49554e1efe6551f4ec98c6ddaa43e072c8f37bf32235007b5d3b96cf0462be4" exitCode=0 Oct 11 10:46:41.045099 master-2 kubenswrapper[4776]: I1011 10:46:41.045078 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/revision-pruner-10-master-2" event={"ID":"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8","Type":"ContainerDied","Data":"c49554e1efe6551f4ec98c6ddaa43e072c8f37bf32235007b5d3b96cf0462be4"} Oct 11 10:46:42.067355 master-2 kubenswrapper[4776]: I1011 10:46:42.067255 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff524bb0-602a-4579-bac9-c3f5c19ec9ba" path="/var/lib/kubelet/pods/ff524bb0-602a-4579-bac9-c3f5c19ec9ba/volumes" Oct 11 10:46:42.366284 master-2 kubenswrapper[4776]: I1011 10:46:42.366244 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/revision-pruner-10-master-2" Oct 11 10:46:42.458057 master-2 kubenswrapper[4776]: I1011 10:46:42.457984 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kube-api-access\") pod \"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8\" (UID: \"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8\") " Oct 11 10:46:42.458276 master-2 kubenswrapper[4776]: I1011 10:46:42.458125 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kubelet-dir\") pod \"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8\" (UID: \"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8\") " Oct 11 10:46:42.458499 master-2 kubenswrapper[4776]: I1011 10:46:42.458469 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c2320cb4-bf2c-4d63-b9c6-5a7461a547e8" (UID: "c2320cb4-bf2c-4d63-b9c6-5a7461a547e8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:46:42.461363 master-2 kubenswrapper[4776]: I1011 10:46:42.461219 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c2320cb4-bf2c-4d63-b9c6-5a7461a547e8" (UID: "c2320cb4-bf2c-4d63-b9c6-5a7461a547e8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:46:42.560604 master-2 kubenswrapper[4776]: I1011 10:46:42.560497 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kubelet-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:42.560604 master-2 kubenswrapper[4776]: I1011 10:46:42.560566 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2320cb4-bf2c-4d63-b9c6-5a7461a547e8-kube-api-access\") on node \"master-2\" DevicePath \"\"" Oct 11 10:46:43.061251 master-2 kubenswrapper[4776]: I1011 10:46:43.061190 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/revision-pruner-10-master-2" event={"ID":"c2320cb4-bf2c-4d63-b9c6-5a7461a547e8","Type":"ContainerDied","Data":"4fd1252f9240022e9bbde7d90d6240b8481dae2c9c680177ca2d6dbdaa850aa7"} Oct 11 10:46:43.061251 master-2 kubenswrapper[4776]: I1011 10:46:43.061236 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fd1252f9240022e9bbde7d90d6240b8481dae2c9c680177ca2d6dbdaa850aa7" Oct 11 10:46:43.061251 master-2 kubenswrapper[4776]: I1011 10:46:43.061239 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/revision-pruner-10-master-2" Oct 11 10:47:07.830830 master-2 kubenswrapper[4776]: I1011 10:47:07.830771 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7"] Oct 11 10:47:07.831560 master-2 kubenswrapper[4776]: E1011 10:47:07.831114 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2320cb4-bf2c-4d63-b9c6-5a7461a547e8" containerName="pruner" Oct 11 10:47:07.831560 master-2 kubenswrapper[4776]: I1011 10:47:07.831148 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2320cb4-bf2c-4d63-b9c6-5a7461a547e8" containerName="pruner" Oct 11 10:47:07.831560 master-2 kubenswrapper[4776]: I1011 10:47:07.831314 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2320cb4-bf2c-4d63-b9c6-5a7461a547e8" containerName="pruner" Oct 11 10:47:07.832641 master-2 kubenswrapper[4776]: I1011 10:47:07.832612 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:07.847297 master-2 kubenswrapper[4776]: I1011 10:47:07.847255 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7"] Oct 11 10:47:07.923132 master-2 kubenswrapper[4776]: I1011 10:47:07.923068 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5rwd\" (UniqueName: \"kubernetes.io/projected/4df9c769-cf84-4934-a70d-16984666e6ed-kube-api-access-g5rwd\") pod \"4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:07.923397 master-2 kubenswrapper[4776]: I1011 10:47:07.923216 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-util\") pod \"4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:07.923397 master-2 kubenswrapper[4776]: I1011 10:47:07.923262 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-bundle\") pod \"4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:08.024358 master-2 kubenswrapper[4776]: I1011 10:47:08.024301 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-util\") pod \"4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:08.024358 master-2 kubenswrapper[4776]: I1011 10:47:08.024360 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-bundle\") pod \"4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:08.024662 master-2 kubenswrapper[4776]: I1011 10:47:08.024396 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5rwd\" (UniqueName: \"kubernetes.io/projected/4df9c769-cf84-4934-a70d-16984666e6ed-kube-api-access-g5rwd\") pod \"4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:08.025399 master-2 kubenswrapper[4776]: I1011 10:47:08.025365 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-util\") pod \"4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:08.025537 master-2 kubenswrapper[4776]: I1011 10:47:08.025403 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-bundle\") pod \"4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:08.048879 master-2 kubenswrapper[4776]: I1011 10:47:08.048815 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5rwd\" (UniqueName: \"kubernetes.io/projected/4df9c769-cf84-4934-a70d-16984666e6ed-kube-api-access-g5rwd\") pod \"4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:08.147577 master-2 kubenswrapper[4776]: I1011 10:47:08.147475 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:08.552151 master-2 kubenswrapper[4776]: I1011 10:47:08.552105 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7"] Oct 11 10:47:08.555932 master-2 kubenswrapper[4776]: W1011 10:47:08.555840 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4df9c769_cf84_4934_a70d_16984666e6ed.slice/crio-2602dd240acd49965323b80f85f1cc979e0ca998ea8a783b4482ae921a906b15 WatchSource:0}: Error finding container 2602dd240acd49965323b80f85f1cc979e0ca998ea8a783b4482ae921a906b15: Status 404 returned error can't find the container with id 2602dd240acd49965323b80f85f1cc979e0ca998ea8a783b4482ae921a906b15 Oct 11 10:47:09.243635 master-2 kubenswrapper[4776]: I1011 10:47:09.243577 4776 generic.go:334] "Generic (PLEG): container finished" podID="4df9c769-cf84-4934-a70d-16984666e6ed" containerID="686b001d4c14f06bbecf2081fdd73a0a5e1b061794e316c9201bdf61e6e0037e" exitCode=0 Oct 11 10:47:09.243635 master-2 kubenswrapper[4776]: I1011 10:47:09.243630 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" event={"ID":"4df9c769-cf84-4934-a70d-16984666e6ed","Type":"ContainerDied","Data":"686b001d4c14f06bbecf2081fdd73a0a5e1b061794e316c9201bdf61e6e0037e"} Oct 11 10:47:09.244300 master-2 kubenswrapper[4776]: I1011 10:47:09.243658 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" event={"ID":"4df9c769-cf84-4934-a70d-16984666e6ed","Type":"ContainerStarted","Data":"2602dd240acd49965323b80f85f1cc979e0ca998ea8a783b4482ae921a906b15"} Oct 11 10:47:11.263029 master-2 kubenswrapper[4776]: I1011 10:47:11.262099 4776 generic.go:334] "Generic (PLEG): container finished" podID="4df9c769-cf84-4934-a70d-16984666e6ed" containerID="a8eb4ac91d75a294c570a3b36d660bcf024c2cc21a78faca1e93e878d79a935a" exitCode=0 Oct 11 10:47:11.263029 master-2 kubenswrapper[4776]: I1011 10:47:11.262153 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" event={"ID":"4df9c769-cf84-4934-a70d-16984666e6ed","Type":"ContainerDied","Data":"a8eb4ac91d75a294c570a3b36d660bcf024c2cc21a78faca1e93e878d79a935a"} Oct 11 10:47:12.275028 master-2 kubenswrapper[4776]: I1011 10:47:12.274947 4776 generic.go:334] "Generic (PLEG): container finished" podID="4df9c769-cf84-4934-a70d-16984666e6ed" containerID="747e01e8adbde76ee6c3e92fe622c0624a737296a2066cf73dd9e691e2e9cd6f" exitCode=0 Oct 11 10:47:12.275028 master-2 kubenswrapper[4776]: I1011 10:47:12.275001 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" event={"ID":"4df9c769-cf84-4934-a70d-16984666e6ed","Type":"ContainerDied","Data":"747e01e8adbde76ee6c3e92fe622c0624a737296a2066cf73dd9e691e2e9cd6f"} Oct 11 10:47:13.596589 master-2 kubenswrapper[4776]: I1011 10:47:13.596525 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:13.797348 master-2 kubenswrapper[4776]: I1011 10:47:13.797230 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-util\") pod \"4df9c769-cf84-4934-a70d-16984666e6ed\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " Oct 11 10:47:13.797517 master-2 kubenswrapper[4776]: I1011 10:47:13.797379 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-bundle\") pod \"4df9c769-cf84-4934-a70d-16984666e6ed\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " Oct 11 10:47:13.797517 master-2 kubenswrapper[4776]: I1011 10:47:13.797459 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5rwd\" (UniqueName: \"kubernetes.io/projected/4df9c769-cf84-4934-a70d-16984666e6ed-kube-api-access-g5rwd\") pod \"4df9c769-cf84-4934-a70d-16984666e6ed\" (UID: \"4df9c769-cf84-4934-a70d-16984666e6ed\") " Oct 11 10:47:13.798239 master-2 kubenswrapper[4776]: I1011 10:47:13.798203 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-bundle" (OuterVolumeSpecName: "bundle") pod "4df9c769-cf84-4934-a70d-16984666e6ed" (UID: "4df9c769-cf84-4934-a70d-16984666e6ed"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:47:13.800196 master-2 kubenswrapper[4776]: I1011 10:47:13.800167 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4df9c769-cf84-4934-a70d-16984666e6ed-kube-api-access-g5rwd" (OuterVolumeSpecName: "kube-api-access-g5rwd") pod "4df9c769-cf84-4934-a70d-16984666e6ed" (UID: "4df9c769-cf84-4934-a70d-16984666e6ed"). InnerVolumeSpecName "kube-api-access-g5rwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:47:13.812776 master-2 kubenswrapper[4776]: I1011 10:47:13.811626 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-util" (OuterVolumeSpecName: "util") pod "4df9c769-cf84-4934-a70d-16984666e6ed" (UID: "4df9c769-cf84-4934-a70d-16984666e6ed"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:47:13.899286 master-2 kubenswrapper[4776]: I1011 10:47:13.899215 4776 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:13.899286 master-2 kubenswrapper[4776]: I1011 10:47:13.899253 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5rwd\" (UniqueName: \"kubernetes.io/projected/4df9c769-cf84-4934-a70d-16984666e6ed-kube-api-access-g5rwd\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:13.899286 master-2 kubenswrapper[4776]: I1011 10:47:13.899265 4776 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4df9c769-cf84-4934-a70d-16984666e6ed-util\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:14.288904 master-2 kubenswrapper[4776]: I1011 10:47:14.288820 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" event={"ID":"4df9c769-cf84-4934-a70d-16984666e6ed","Type":"ContainerDied","Data":"2602dd240acd49965323b80f85f1cc979e0ca998ea8a783b4482ae921a906b15"} Oct 11 10:47:14.288904 master-2 kubenswrapper[4776]: I1011 10:47:14.288869 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2602dd240acd49965323b80f85f1cc979e0ca998ea8a783b4482ae921a906b15" Oct 11 10:47:14.288904 master-2 kubenswrapper[4776]: I1011 10:47:14.288871 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7" Oct 11 10:47:15.520355 master-2 kubenswrapper[4776]: I1011 10:47:15.520289 4776 scope.go:117] "RemoveContainer" containerID="5dcd1c043c2c18cfa07bcf1cba0c1e16ed116132974cf974809ac324fe8a6c21" Oct 11 10:47:34.519278 master-2 kubenswrapper[4776]: I1011 10:47:34.519213 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf"] Oct 11 10:47:34.519860 master-2 kubenswrapper[4776]: E1011 10:47:34.519478 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4df9c769-cf84-4934-a70d-16984666e6ed" containerName="pull" Oct 11 10:47:34.519860 master-2 kubenswrapper[4776]: I1011 10:47:34.519492 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4df9c769-cf84-4934-a70d-16984666e6ed" containerName="pull" Oct 11 10:47:34.519860 master-2 kubenswrapper[4776]: E1011 10:47:34.519505 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4df9c769-cf84-4934-a70d-16984666e6ed" containerName="util" Oct 11 10:47:34.519860 master-2 kubenswrapper[4776]: I1011 10:47:34.519513 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4df9c769-cf84-4934-a70d-16984666e6ed" containerName="util" Oct 11 10:47:34.519860 master-2 kubenswrapper[4776]: E1011 10:47:34.519529 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4df9c769-cf84-4934-a70d-16984666e6ed" containerName="extract" Oct 11 10:47:34.519860 master-2 kubenswrapper[4776]: I1011 10:47:34.519536 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4df9c769-cf84-4934-a70d-16984666e6ed" containerName="extract" Oct 11 10:47:34.519860 master-2 kubenswrapper[4776]: I1011 10:47:34.519649 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="4df9c769-cf84-4934-a70d-16984666e6ed" 
containerName="extract" Oct 11 10:47:34.520607 master-2 kubenswrapper[4776]: I1011 10:47:34.520558 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:34.536971 master-2 kubenswrapper[4776]: I1011 10:47:34.536920 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf"] Oct 11 10:47:34.598745 master-2 kubenswrapper[4776]: I1011 10:47:34.598262 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:34.598745 master-2 kubenswrapper[4776]: I1011 10:47:34.598336 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:34.598745 master-2 kubenswrapper[4776]: I1011 10:47:34.598406 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8jv9\" (UniqueName: \"kubernetes.io/projected/88274ab8-e9fc-466f-a1c0-a4f210d7beae-kube-api-access-q8jv9\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:34.699756 master-2 kubenswrapper[4776]: I1011 10:47:34.699657 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:34.699756 master-2 kubenswrapper[4776]: I1011 10:47:34.699759 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8jv9\" (UniqueName: \"kubernetes.io/projected/88274ab8-e9fc-466f-a1c0-a4f210d7beae-kube-api-access-q8jv9\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:34.700106 master-2 kubenswrapper[4776]: I1011 10:47:34.699810 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:34.700329 master-2 kubenswrapper[4776]: I1011 10:47:34.700269 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:34.700329 master-2 kubenswrapper[4776]: I1011 10:47:34.700308 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:34.731319 master-2 kubenswrapper[4776]: I1011 10:47:34.731260 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8jv9\" (UniqueName: \"kubernetes.io/projected/88274ab8-e9fc-466f-a1c0-a4f210d7beae-kube-api-access-q8jv9\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:34.835167 master-2 kubenswrapper[4776]: I1011 10:47:34.834754 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:34.930706 master-2 kubenswrapper[4776]: I1011 10:47:34.926911 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6"] Oct 11 10:47:34.930706 master-2 kubenswrapper[4776]: I1011 10:47:34.928558 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:34.951628 master-2 kubenswrapper[4776]: I1011 10:47:34.951573 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6"] Oct 11 10:47:35.010368 master-2 kubenswrapper[4776]: I1011 10:47:35.010312 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:35.010368 master-2 kubenswrapper[4776]: I1011 10:47:35.010369 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:35.010585 master-2 kubenswrapper[4776]: I1011 10:47:35.010511 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzl5z\" (UniqueName: \"kubernetes.io/projected/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-kube-api-access-nzl5z\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:35.111961 master-2 kubenswrapper[4776]: I1011 10:47:35.111823 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:35.111961 master-2 kubenswrapper[4776]: I1011 10:47:35.111891 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:35.111961 master-2 kubenswrapper[4776]: I1011 10:47:35.111942 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzl5z\" (UniqueName: \"kubernetes.io/projected/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-kube-api-access-nzl5z\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:35.112413 master-2 kubenswrapper[4776]: I1011 10:47:35.112373 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-util\") pod 
\"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:35.112604 master-2 kubenswrapper[4776]: I1011 10:47:35.112574 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:35.135345 master-2 kubenswrapper[4776]: I1011 10:47:35.135295 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzl5z\" (UniqueName: \"kubernetes.io/projected/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-kube-api-access-nzl5z\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:35.249779 master-2 kubenswrapper[4776]: I1011 10:47:35.249617 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf"] Oct 11 10:47:35.251437 master-2 kubenswrapper[4776]: I1011 10:47:35.251382 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:35.415475 master-2 kubenswrapper[4776]: I1011 10:47:35.415409 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" event={"ID":"88274ab8-e9fc-466f-a1c0-a4f210d7beae","Type":"ContainerStarted","Data":"db3f09da0741b1b6979e52de550944c4e47d224e0a7c3f5306dc4940c5314a6e"} Oct 11 10:47:35.415475 master-2 kubenswrapper[4776]: I1011 10:47:35.415466 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" event={"ID":"88274ab8-e9fc-466f-a1c0-a4f210d7beae","Type":"ContainerStarted","Data":"075be6091c656d0be6436a71f3518fd2ae48c009d5a01ed8baceb70ed015d7d7"} Oct 11 10:47:35.748827 master-2 kubenswrapper[4776]: W1011 10:47:35.748752 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27ad118d_adf9_4bbb_93ca_a7ca0e52a1bf.slice/crio-f7e8a85b65f5b64c4c9e99652ee47c22fdcbb69827a1aac046883437537f5a0d WatchSource:0}: Error finding container f7e8a85b65f5b64c4c9e99652ee47c22fdcbb69827a1aac046883437537f5a0d: Status 404 returned error can't find the container with id f7e8a85b65f5b64c4c9e99652ee47c22fdcbb69827a1aac046883437537f5a0d Oct 11 10:47:35.749286 master-2 kubenswrapper[4776]: I1011 10:47:35.748928 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6"] Oct 11 10:47:36.321824 master-2 kubenswrapper[4776]: I1011 10:47:36.321732 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt"] Oct 11 10:47:36.323201 master-2 kubenswrapper[4776]: I1011 10:47:36.323148 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:36.337121 master-2 kubenswrapper[4776]: I1011 10:47:36.337062 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt"] Oct 11 10:47:36.425201 master-2 kubenswrapper[4776]: I1011 10:47:36.425135 4776 generic.go:334] "Generic (PLEG): container finished" podID="27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" containerID="28f1ad4c7e54c6e7bae910d8098c0f170760e51d44fa824bf874ee757107cfaf" exitCode=0 Oct 11 10:47:36.425450 master-2 kubenswrapper[4776]: I1011 10:47:36.425215 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" event={"ID":"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf","Type":"ContainerDied","Data":"28f1ad4c7e54c6e7bae910d8098c0f170760e51d44fa824bf874ee757107cfaf"} Oct 11 10:47:36.425450 master-2 kubenswrapper[4776]: I1011 10:47:36.425247 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" event={"ID":"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf","Type":"ContainerStarted","Data":"f7e8a85b65f5b64c4c9e99652ee47c22fdcbb69827a1aac046883437537f5a0d"} Oct 11 10:47:36.427047 master-2 kubenswrapper[4776]: I1011 10:47:36.427019 4776 generic.go:334] "Generic (PLEG): container finished" podID="88274ab8-e9fc-466f-a1c0-a4f210d7beae" containerID="db3f09da0741b1b6979e52de550944c4e47d224e0a7c3f5306dc4940c5314a6e" exitCode=0 Oct 11 10:47:36.427047 master-2 kubenswrapper[4776]: I1011 10:47:36.427052 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" event={"ID":"88274ab8-e9fc-466f-a1c0-a4f210d7beae","Type":"ContainerDied","Data":"db3f09da0741b1b6979e52de550944c4e47d224e0a7c3f5306dc4940c5314a6e"} Oct 11 10:47:36.428464 master-2 kubenswrapper[4776]: I1011 10:47:36.428405 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdf9m\" (UniqueName: \"kubernetes.io/projected/fb231c9e-66e8-4fdf-870d-a927418a72fa-kube-api-access-mdf9m\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:36.428464 master-2 kubenswrapper[4776]: I1011 10:47:36.428451 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:36.428606 master-2 kubenswrapper[4776]: I1011 10:47:36.428476 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:36.529747 master-2 kubenswrapper[4776]: I1011 10:47:36.529664 
4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdf9m\" (UniqueName: \"kubernetes.io/projected/fb231c9e-66e8-4fdf-870d-a927418a72fa-kube-api-access-mdf9m\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:36.529747 master-2 kubenswrapper[4776]: I1011 10:47:36.529749 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:36.530283 master-2 kubenswrapper[4776]: I1011 10:47:36.529797 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:36.530509 master-2 kubenswrapper[4776]: I1011 10:47:36.530416 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:36.530509 master-2 kubenswrapper[4776]: I1011 10:47:36.530469 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:36.567426 master-2 kubenswrapper[4776]: I1011 10:47:36.567308 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdf9m\" (UniqueName: \"kubernetes.io/projected/fb231c9e-66e8-4fdf-870d-a927418a72fa-kube-api-access-mdf9m\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:36.646640 master-2 kubenswrapper[4776]: I1011 10:47:36.646362 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:37.105491 master-2 kubenswrapper[4776]: I1011 10:47:37.105445 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt"] Oct 11 10:47:37.108766 master-2 kubenswrapper[4776]: W1011 10:47:37.107986 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb231c9e_66e8_4fdf_870d_a927418a72fa.slice/crio-60ed1a00a20b67f1956ea0a0a0b32d98fa346f6c776f5458abb7274705bca8fa WatchSource:0}: Error finding container 60ed1a00a20b67f1956ea0a0a0b32d98fa346f6c776f5458abb7274705bca8fa: Status 404 returned error can't find the container with id 60ed1a00a20b67f1956ea0a0a0b32d98fa346f6c776f5458abb7274705bca8fa Oct 11 10:47:37.434751 master-2 kubenswrapper[4776]: I1011 10:47:37.434435 4776 generic.go:334] "Generic (PLEG): container finished" podID="fb231c9e-66e8-4fdf-870d-a927418a72fa" containerID="39d64073eca44763d4810b05db5d70ffbca978f133a7a44b8ba9480c3da9a335" exitCode=0 Oct 11 10:47:37.434751 master-2 kubenswrapper[4776]: I1011 10:47:37.434507 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" event={"ID":"fb231c9e-66e8-4fdf-870d-a927418a72fa","Type":"ContainerDied","Data":"39d64073eca44763d4810b05db5d70ffbca978f133a7a44b8ba9480c3da9a335"} Oct 11 10:47:37.435047 master-2 kubenswrapper[4776]: I1011 10:47:37.434769 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" event={"ID":"fb231c9e-66e8-4fdf-870d-a927418a72fa","Type":"ContainerStarted","Data":"60ed1a00a20b67f1956ea0a0a0b32d98fa346f6c776f5458abb7274705bca8fa"} Oct 11 10:47:38.442449 master-2 kubenswrapper[4776]: I1011 10:47:38.442351 4776 generic.go:334] "Generic (PLEG): container finished" podID="88274ab8-e9fc-466f-a1c0-a4f210d7beae" containerID="1020de00cf93f1e2ffe9bd58d8cb1276ac5e65643d39d4776195df14f3677e41" exitCode=0 Oct 11 10:47:38.442449 master-2 kubenswrapper[4776]: I1011 10:47:38.442414 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" event={"ID":"88274ab8-e9fc-466f-a1c0-a4f210d7beae","Type":"ContainerDied","Data":"1020de00cf93f1e2ffe9bd58d8cb1276ac5e65643d39d4776195df14f3677e41"} Oct 11 10:47:39.451712 master-2 kubenswrapper[4776]: I1011 10:47:39.451647 4776 generic.go:334] "Generic (PLEG): container finished" podID="88274ab8-e9fc-466f-a1c0-a4f210d7beae" containerID="80b24e95c365c5820128ebd64c113cb5de8b49ca2df72b3c7182cc6b16ad2cf8" exitCode=0 Oct 11 10:47:39.452028 master-2 kubenswrapper[4776]: I1011 10:47:39.451715 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" event={"ID":"88274ab8-e9fc-466f-a1c0-a4f210d7beae","Type":"ContainerDied","Data":"80b24e95c365c5820128ebd64c113cb5de8b49ca2df72b3c7182cc6b16ad2cf8"} Oct 11 10:47:40.466407 master-2 kubenswrapper[4776]: I1011 10:47:40.466363 4776 generic.go:334] "Generic (PLEG): container finished" podID="fb231c9e-66e8-4fdf-870d-a927418a72fa" containerID="f2214ae6145487a3745153c956da3ff5e1e01b6136fad0863abb18dad0873bbd" exitCode=0 Oct 11 10:47:40.467343 master-2 kubenswrapper[4776]: I1011 10:47:40.466418 4776 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" event={"ID":"fb231c9e-66e8-4fdf-870d-a927418a72fa","Type":"ContainerDied","Data":"f2214ae6145487a3745153c956da3ff5e1e01b6136fad0863abb18dad0873bbd"} Oct 11 10:47:40.468602 master-2 kubenswrapper[4776]: I1011 10:47:40.468522 4776 generic.go:334] "Generic (PLEG): container finished" podID="27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" containerID="ded3f00970be8ea1637c5f85c665806c5a28fbc3cd6e930abf54671fefb96c09" exitCode=0 Oct 11 10:47:40.468760 master-2 kubenswrapper[4776]: I1011 10:47:40.468729 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" event={"ID":"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf","Type":"ContainerDied","Data":"ded3f00970be8ea1637c5f85c665806c5a28fbc3cd6e930abf54671fefb96c09"} Oct 11 10:47:40.986617 master-2 kubenswrapper[4776]: I1011 10:47:40.986583 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:41.092962 master-2 kubenswrapper[4776]: I1011 10:47:41.092841 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8jv9\" (UniqueName: \"kubernetes.io/projected/88274ab8-e9fc-466f-a1c0-a4f210d7beae-kube-api-access-q8jv9\") pod \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " Oct 11 10:47:41.092962 master-2 kubenswrapper[4776]: I1011 10:47:41.092961 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-util\") pod \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " Oct 11 10:47:41.093145 master-2 kubenswrapper[4776]: I1011 10:47:41.093043 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-bundle\") pod \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\" (UID: \"88274ab8-e9fc-466f-a1c0-a4f210d7beae\") " Oct 11 10:47:41.094079 master-2 kubenswrapper[4776]: I1011 10:47:41.094025 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-bundle" (OuterVolumeSpecName: "bundle") pod "88274ab8-e9fc-466f-a1c0-a4f210d7beae" (UID: "88274ab8-e9fc-466f-a1c0-a4f210d7beae"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:47:41.095621 master-2 kubenswrapper[4776]: I1011 10:47:41.095555 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88274ab8-e9fc-466f-a1c0-a4f210d7beae-kube-api-access-q8jv9" (OuterVolumeSpecName: "kube-api-access-q8jv9") pod "88274ab8-e9fc-466f-a1c0-a4f210d7beae" (UID: "88274ab8-e9fc-466f-a1c0-a4f210d7beae"). InnerVolumeSpecName "kube-api-access-q8jv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:47:41.102497 master-2 kubenswrapper[4776]: I1011 10:47:41.102444 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-util" (OuterVolumeSpecName: "util") pod "88274ab8-e9fc-466f-a1c0-a4f210d7beae" (UID: "88274ab8-e9fc-466f-a1c0-a4f210d7beae"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:47:41.194695 master-2 kubenswrapper[4776]: I1011 10:47:41.194621 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8jv9\" (UniqueName: \"kubernetes.io/projected/88274ab8-e9fc-466f-a1c0-a4f210d7beae-kube-api-access-q8jv9\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:41.194964 master-2 kubenswrapper[4776]: I1011 10:47:41.194899 4776 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-util\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:41.195631 master-2 kubenswrapper[4776]: I1011 10:47:41.195537 4776 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88274ab8-e9fc-466f-a1c0-a4f210d7beae-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:41.479861 master-2 kubenswrapper[4776]: I1011 10:47:41.479785 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" event={"ID":"88274ab8-e9fc-466f-a1c0-a4f210d7beae","Type":"ContainerDied","Data":"075be6091c656d0be6436a71f3518fd2ae48c009d5a01ed8baceb70ed015d7d7"} Oct 11 10:47:41.479861 master-2 kubenswrapper[4776]: I1011 10:47:41.479847 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="075be6091c656d0be6436a71f3518fd2ae48c009d5a01ed8baceb70ed015d7d7" Oct 11 10:47:41.480438 master-2 kubenswrapper[4776]: I1011 10:47:41.479959 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf" Oct 11 10:47:41.484780 master-2 kubenswrapper[4776]: I1011 10:47:41.484131 4776 generic.go:334] "Generic (PLEG): container finished" podID="fb231c9e-66e8-4fdf-870d-a927418a72fa" containerID="3a4d994b947ef75b2bb0de7cf22273e6998d07e99740c2f7d03761a1aca1861b" exitCode=0 Oct 11 10:47:41.484780 master-2 kubenswrapper[4776]: I1011 10:47:41.484195 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" event={"ID":"fb231c9e-66e8-4fdf-870d-a927418a72fa","Type":"ContainerDied","Data":"3a4d994b947ef75b2bb0de7cf22273e6998d07e99740c2f7d03761a1aca1861b"} Oct 11 10:47:41.486443 master-2 kubenswrapper[4776]: I1011 10:47:41.486420 4776 generic.go:334] "Generic (PLEG): container finished" podID="27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" containerID="fbf19dd452fce4effe941a49a87f63dc6de0bf44c956630b2ea2b9f4396e7f36" exitCode=0 Oct 11 10:47:41.486507 master-2 kubenswrapper[4776]: I1011 10:47:41.486452 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" event={"ID":"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf","Type":"ContainerDied","Data":"fbf19dd452fce4effe941a49a87f63dc6de0bf44c956630b2ea2b9f4396e7f36"} Oct 11 10:47:41.722989 master-2 kubenswrapper[4776]: I1011 10:47:41.722941 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l"] Oct 11 10:47:41.723396 master-2 kubenswrapper[4776]: E1011 10:47:41.723383 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88274ab8-e9fc-466f-a1c0-a4f210d7beae" containerName="extract" Oct 11 10:47:41.723462 master-2 kubenswrapper[4776]: I1011 10:47:41.723452 4776 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="88274ab8-e9fc-466f-a1c0-a4f210d7beae" containerName="extract" Oct 11 10:47:41.723536 master-2 kubenswrapper[4776]: E1011 10:47:41.723524 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88274ab8-e9fc-466f-a1c0-a4f210d7beae" containerName="util" Oct 11 10:47:41.723607 master-2 kubenswrapper[4776]: I1011 10:47:41.723596 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="88274ab8-e9fc-466f-a1c0-a4f210d7beae" containerName="util" Oct 11 10:47:41.723685 master-2 kubenswrapper[4776]: E1011 10:47:41.723661 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88274ab8-e9fc-466f-a1c0-a4f210d7beae" containerName="pull" Oct 11 10:47:41.723748 master-2 kubenswrapper[4776]: I1011 10:47:41.723739 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="88274ab8-e9fc-466f-a1c0-a4f210d7beae" containerName="pull" Oct 11 10:47:41.723909 master-2 kubenswrapper[4776]: I1011 10:47:41.723898 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="88274ab8-e9fc-466f-a1c0-a4f210d7beae" containerName="extract" Oct 11 10:47:41.725025 master-2 kubenswrapper[4776]: I1011 10:47:41.725005 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" Oct 11 10:47:41.744858 master-2 kubenswrapper[4776]: I1011 10:47:41.744646 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l"] Oct 11 10:47:41.805983 master-2 kubenswrapper[4776]: I1011 10:47:41.805781 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp5xx\" (UniqueName: \"kubernetes.io/projected/4f83a3b5-333b-4284-b03d-c03db77c3241-kube-api-access-gp5xx\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" Oct 11 10:47:41.805983 master-2 kubenswrapper[4776]: I1011 10:47:41.805891 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-util\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" Oct 11 10:47:41.805983 master-2 kubenswrapper[4776]: I1011 10:47:41.805930 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" Oct 11 10:47:41.906977 master-2 kubenswrapper[4776]: I1011 10:47:41.906869 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-util\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" Oct 11 10:47:41.906977 master-2 
kubenswrapper[4776]: I1011 10:47:41.906935 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" Oct 11 10:47:41.906977 master-2 kubenswrapper[4776]: I1011 10:47:41.906991 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp5xx\" (UniqueName: \"kubernetes.io/projected/4f83a3b5-333b-4284-b03d-c03db77c3241-kube-api-access-gp5xx\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" Oct 11 10:47:41.907649 master-2 kubenswrapper[4776]: I1011 10:47:41.907601 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" Oct 11 10:47:41.907649 master-2 kubenswrapper[4776]: I1011 10:47:41.907615 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-util\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" Oct 11 10:47:41.941894 master-2 kubenswrapper[4776]: I1011 10:47:41.941844 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp5xx\" (UniqueName: \"kubernetes.io/projected/4f83a3b5-333b-4284-b03d-c03db77c3241-kube-api-access-gp5xx\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" Oct 11 10:47:42.078265 master-2 kubenswrapper[4776]: I1011 10:47:42.078074 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" Oct 11 10:47:42.538586 master-2 kubenswrapper[4776]: W1011 10:47:42.538526 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f83a3b5_333b_4284_b03d_c03db77c3241.slice/crio-1d3c16a3929d8110ce5e8257b370727ad2dedf5c1aea80548a6befaede03b16c WatchSource:0}: Error finding container 1d3c16a3929d8110ce5e8257b370727ad2dedf5c1aea80548a6befaede03b16c: Status 404 returned error can't find the container with id 1d3c16a3929d8110ce5e8257b370727ad2dedf5c1aea80548a6befaede03b16c Oct 11 10:47:42.542462 master-2 kubenswrapper[4776]: I1011 10:47:42.542416 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l"] Oct 11 10:47:42.852168 master-2 kubenswrapper[4776]: I1011 10:47:42.852135 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:42.869668 master-2 kubenswrapper[4776]: I1011 10:47:42.869624 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:43.022965 master-2 kubenswrapper[4776]: I1011 10:47:43.022928 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-util\") pod \"fb231c9e-66e8-4fdf-870d-a927418a72fa\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") " Oct 11 10:47:43.023745 master-2 kubenswrapper[4776]: I1011 10:47:43.023728 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-util\") pod \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") " Oct 11 10:47:43.024009 master-2 kubenswrapper[4776]: I1011 10:47:43.023995 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-bundle\") pod \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") " Oct 11 10:47:43.024124 master-2 kubenswrapper[4776]: I1011 10:47:43.024111 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdf9m\" (UniqueName: \"kubernetes.io/projected/fb231c9e-66e8-4fdf-870d-a927418a72fa-kube-api-access-mdf9m\") pod \"fb231c9e-66e8-4fdf-870d-a927418a72fa\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") " Oct 11 10:47:43.024581 master-2 kubenswrapper[4776]: I1011 10:47:43.024568 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-bundle\") pod \"fb231c9e-66e8-4fdf-870d-a927418a72fa\" (UID: \"fb231c9e-66e8-4fdf-870d-a927418a72fa\") " Oct 11 10:47:43.024706 master-2 kubenswrapper[4776]: I1011 10:47:43.024692 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzl5z\" (UniqueName: \"kubernetes.io/projected/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-kube-api-access-nzl5z\") pod \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\" (UID: \"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf\") " Oct 11 10:47:43.025041 master-2 kubenswrapper[4776]: I1011 10:47:43.024995 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-bundle" (OuterVolumeSpecName: "bundle") pod "fb231c9e-66e8-4fdf-870d-a927418a72fa" (UID: "fb231c9e-66e8-4fdf-870d-a927418a72fa"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:47:43.025563 master-2 kubenswrapper[4776]: I1011 10:47:43.025520 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-bundle" (OuterVolumeSpecName: "bundle") pod "27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" (UID: "27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:47:43.027068 master-2 kubenswrapper[4776]: I1011 10:47:43.026981 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb231c9e-66e8-4fdf-870d-a927418a72fa-kube-api-access-mdf9m" (OuterVolumeSpecName: "kube-api-access-mdf9m") pod "fb231c9e-66e8-4fdf-870d-a927418a72fa" (UID: "fb231c9e-66e8-4fdf-870d-a927418a72fa"). InnerVolumeSpecName "kube-api-access-mdf9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:47:43.027220 master-2 kubenswrapper[4776]: I1011 10:47:43.027202 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-kube-api-access-nzl5z" (OuterVolumeSpecName: "kube-api-access-nzl5z") pod "27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" (UID: "27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf"). InnerVolumeSpecName "kube-api-access-nzl5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:47:43.033975 master-2 kubenswrapper[4776]: I1011 10:47:43.033947 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-util" (OuterVolumeSpecName: "util") pod "fb231c9e-66e8-4fdf-870d-a927418a72fa" (UID: "fb231c9e-66e8-4fdf-870d-a927418a72fa"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:47:43.038225 master-2 kubenswrapper[4776]: I1011 10:47:43.037781 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-util" (OuterVolumeSpecName: "util") pod "27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" (UID: "27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:47:43.127101 master-2 kubenswrapper[4776]: I1011 10:47:43.126225 4776 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:43.127101 master-2 kubenswrapper[4776]: I1011 10:47:43.126445 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdf9m\" (UniqueName: \"kubernetes.io/projected/fb231c9e-66e8-4fdf-870d-a927418a72fa-kube-api-access-mdf9m\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:43.127101 master-2 kubenswrapper[4776]: I1011 10:47:43.126943 4776 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:43.127101 master-2 kubenswrapper[4776]: I1011 10:47:43.126960 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzl5z\" (UniqueName: \"kubernetes.io/projected/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-kube-api-access-nzl5z\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:43.127101 master-2 kubenswrapper[4776]: I1011 10:47:43.126974 4776 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fb231c9e-66e8-4fdf-870d-a927418a72fa-util\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:43.127101 master-2 kubenswrapper[4776]: I1011 10:47:43.126983 4776 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf-util\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:43.503968 master-2 kubenswrapper[4776]: I1011 10:47:43.503912 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" Oct 11 10:47:43.503968 master-2 kubenswrapper[4776]: I1011 10:47:43.503920 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt" event={"ID":"fb231c9e-66e8-4fdf-870d-a927418a72fa","Type":"ContainerDied","Data":"60ed1a00a20b67f1956ea0a0a0b32d98fa346f6c776f5458abb7274705bca8fa"} Oct 11 10:47:43.504228 master-2 kubenswrapper[4776]: I1011 10:47:43.503998 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60ed1a00a20b67f1956ea0a0a0b32d98fa346f6c776f5458abb7274705bca8fa" Oct 11 10:47:43.506070 master-2 kubenswrapper[4776]: I1011 10:47:43.506017 4776 generic.go:334] "Generic (PLEG): container finished" podID="4f83a3b5-333b-4284-b03d-c03db77c3241" containerID="3e0e93dbf8c43d20e82487f3c320d83e0986abea8429ad92f58db09a7d7bc359" exitCode=0 Oct 11 10:47:43.506168 master-2 kubenswrapper[4776]: I1011 10:47:43.506129 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" event={"ID":"4f83a3b5-333b-4284-b03d-c03db77c3241","Type":"ContainerDied","Data":"3e0e93dbf8c43d20e82487f3c320d83e0986abea8429ad92f58db09a7d7bc359"} Oct 11 10:47:43.506212 master-2 kubenswrapper[4776]: I1011 10:47:43.506175 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" event={"ID":"4f83a3b5-333b-4284-b03d-c03db77c3241","Type":"ContainerStarted","Data":"1d3c16a3929d8110ce5e8257b370727ad2dedf5c1aea80548a6befaede03b16c"} Oct 11 10:47:43.512642 master-2 kubenswrapper[4776]: I1011 10:47:43.512589 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" event={"ID":"27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf","Type":"ContainerDied","Data":"f7e8a85b65f5b64c4c9e99652ee47c22fdcbb69827a1aac046883437537f5a0d"} Oct 11 10:47:43.512642 master-2 kubenswrapper[4776]: I1011 10:47:43.512639 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7e8a85b65f5b64c4c9e99652ee47c22fdcbb69827a1aac046883437537f5a0d" Oct 11 10:47:43.512908 master-2 kubenswrapper[4776]: I1011 10:47:43.512876 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6" Oct 11 10:47:44.530706 master-2 kubenswrapper[4776]: I1011 10:47:44.527222 4776 generic.go:334] "Generic (PLEG): container finished" podID="8757af56-20fb-439e-adba-7e4e50378936" containerID="25fc39e758a2899d86cad41cf89dd130d8c1f8d7d2271b02d90a5c1db60a0fae" exitCode=0 Oct 11 10:47:44.530706 master-2 kubenswrapper[4776]: I1011 10:47:44.527272 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-v6dfc" event={"ID":"8757af56-20fb-439e-adba-7e4e50378936","Type":"ContainerDied","Data":"25fc39e758a2899d86cad41cf89dd130d8c1f8d7d2271b02d90a5c1db60a0fae"} Oct 11 10:47:45.534717 master-2 kubenswrapper[4776]: I1011 10:47:45.534633 4776 generic.go:334] "Generic (PLEG): container finished" podID="4f83a3b5-333b-4284-b03d-c03db77c3241" containerID="3527b856abee72f41d55546ce66e57c5fd282375bf2fd3d51da6cf6aa9ac8f13" exitCode=0 Oct 11 10:47:45.534717 master-2 kubenswrapper[4776]: I1011 10:47:45.534695 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" event={"ID":"4f83a3b5-333b-4284-b03d-c03db77c3241","Type":"ContainerDied","Data":"3527b856abee72f41d55546ce66e57c5fd282375bf2fd3d51da6cf6aa9ac8f13"} Oct 11 10:47:45.609230 master-2 kubenswrapper[4776]: I1011 10:47:45.609189 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:47:45.660421 master-2 kubenswrapper[4776]: I1011 10:47:45.660379 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpxlw\" (UniqueName: \"kubernetes.io/projected/8757af56-20fb-439e-adba-7e4e50378936-kube-api-access-xpxlw\") pod \"8757af56-20fb-439e-adba-7e4e50378936\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " Oct 11 10:47:45.660535 master-2 kubenswrapper[4776]: I1011 10:47:45.660513 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-var-run-resolv-conf\") pod \"8757af56-20fb-439e-adba-7e4e50378936\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " Oct 11 10:47:45.660635 master-2 kubenswrapper[4776]: I1011 10:47:45.660590 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "8757af56-20fb-439e-adba-7e4e50378936" (UID: "8757af56-20fb-439e-adba-7e4e50378936"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:47:45.660704 master-2 kubenswrapper[4776]: I1011 10:47:45.660639 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "8757af56-20fb-439e-adba-7e4e50378936" (UID: "8757af56-20fb-439e-adba-7e4e50378936"). InnerVolumeSpecName "host-resolv-conf". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:47:45.660704 master-2 kubenswrapper[4776]: I1011 10:47:45.660610 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-resolv-conf\") pod \"8757af56-20fb-439e-adba-7e4e50378936\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " Oct 11 10:47:45.660808 master-2 kubenswrapper[4776]: I1011 10:47:45.660763 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-ca-bundle\") pod \"8757af56-20fb-439e-adba-7e4e50378936\" (UID: \"8757af56-20fb-439e-adba-7e4e50378936\") " Oct 11 10:47:45.660922 master-2 kubenswrapper[4776]: I1011 10:47:45.660882 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "8757af56-20fb-439e-adba-7e4e50378936" (UID: "8757af56-20fb-439e-adba-7e4e50378936"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:47:45.661123 master-2 kubenswrapper[4776]: I1011 10:47:45.661095 4776 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:45.661123 master-2 kubenswrapper[4776]: I1011 10:47:45.661118 4776 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-var-run-resolv-conf\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:45.661204 master-2 kubenswrapper[4776]: I1011 10:47:45.661130 4776 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8757af56-20fb-439e-adba-7e4e50378936-host-resolv-conf\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:45.663494 master-2 kubenswrapper[4776]: I1011 10:47:45.663459 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8757af56-20fb-439e-adba-7e4e50378936-kube-api-access-xpxlw" (OuterVolumeSpecName: "kube-api-access-xpxlw") pod "8757af56-20fb-439e-adba-7e4e50378936" (UID: "8757af56-20fb-439e-adba-7e4e50378936"). InnerVolumeSpecName "kube-api-access-xpxlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:47:45.761665 master-2 kubenswrapper[4776]: I1011 10:47:45.761595 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpxlw\" (UniqueName: \"kubernetes.io/projected/8757af56-20fb-439e-adba-7e4e50378936-kube-api-access-xpxlw\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:46.543129 master-2 kubenswrapper[4776]: I1011 10:47:46.543070 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-v6dfc" event={"ID":"8757af56-20fb-439e-adba-7e4e50378936","Type":"ContainerDied","Data":"a75184c2ae2e40b8111ddce83c4348030deaf04dbe5bc245f6d525047856b81f"} Oct 11 10:47:46.543129 master-2 kubenswrapper[4776]: I1011 10:47:46.543102 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-v6dfc" Oct 11 10:47:46.543744 master-2 kubenswrapper[4776]: I1011 10:47:46.543115 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a75184c2ae2e40b8111ddce83c4348030deaf04dbe5bc245f6d525047856b81f" Oct 11 10:47:46.544966 master-2 kubenswrapper[4776]: I1011 10:47:46.544936 4776 generic.go:334] "Generic (PLEG): container finished" podID="4f83a3b5-333b-4284-b03d-c03db77c3241" containerID="533b32d657ced976c1f8ae60f6fde2e135c65e3a32a237e000363038aba1c179" exitCode=0 Oct 11 10:47:46.545043 master-2 kubenswrapper[4776]: I1011 10:47:46.544971 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" event={"ID":"4f83a3b5-333b-4284-b03d-c03db77c3241","Type":"ContainerDied","Data":"533b32d657ced976c1f8ae60f6fde2e135c65e3a32a237e000363038aba1c179"} Oct 11 10:47:47.871003 master-2 kubenswrapper[4776]: I1011 10:47:47.870966 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" Oct 11 10:47:47.993170 master-2 kubenswrapper[4776]: I1011 10:47:47.993113 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-bundle\") pod \"4f83a3b5-333b-4284-b03d-c03db77c3241\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " Oct 11 10:47:47.993376 master-2 kubenswrapper[4776]: I1011 10:47:47.993244 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-util\") pod \"4f83a3b5-333b-4284-b03d-c03db77c3241\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " Oct 11 10:47:47.993376 master-2 kubenswrapper[4776]: I1011 10:47:47.993312 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp5xx\" (UniqueName: \"kubernetes.io/projected/4f83a3b5-333b-4284-b03d-c03db77c3241-kube-api-access-gp5xx\") pod \"4f83a3b5-333b-4284-b03d-c03db77c3241\" (UID: \"4f83a3b5-333b-4284-b03d-c03db77c3241\") " Oct 11 10:47:47.995694 master-2 kubenswrapper[4776]: I1011 10:47:47.995137 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-bundle" (OuterVolumeSpecName: "bundle") pod "4f83a3b5-333b-4284-b03d-c03db77c3241" (UID: "4f83a3b5-333b-4284-b03d-c03db77c3241"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:47:47.998801 master-2 kubenswrapper[4776]: I1011 10:47:47.996150 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f83a3b5-333b-4284-b03d-c03db77c3241-kube-api-access-gp5xx" (OuterVolumeSpecName: "kube-api-access-gp5xx") pod "4f83a3b5-333b-4284-b03d-c03db77c3241" (UID: "4f83a3b5-333b-4284-b03d-c03db77c3241"). InnerVolumeSpecName "kube-api-access-gp5xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:47:48.007382 master-2 kubenswrapper[4776]: I1011 10:47:48.007335 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-util" (OuterVolumeSpecName: "util") pod "4f83a3b5-333b-4284-b03d-c03db77c3241" (UID: "4f83a3b5-333b-4284-b03d-c03db77c3241"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:47:48.094830 master-2 kubenswrapper[4776]: I1011 10:47:48.094700 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp5xx\" (UniqueName: \"kubernetes.io/projected/4f83a3b5-333b-4284-b03d-c03db77c3241-kube-api-access-gp5xx\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:48.094830 master-2 kubenswrapper[4776]: I1011 10:47:48.094744 4776 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:48.094830 master-2 kubenswrapper[4776]: I1011 10:47:48.094755 4776 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f83a3b5-333b-4284-b03d-c03db77c3241-util\") on node \"master-2\" DevicePath \"\"" Oct 11 10:47:48.557147 master-2 kubenswrapper[4776]: I1011 10:47:48.557050 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" event={"ID":"4f83a3b5-333b-4284-b03d-c03db77c3241","Type":"ContainerDied","Data":"1d3c16a3929d8110ce5e8257b370727ad2dedf5c1aea80548a6befaede03b16c"} Oct 11 10:47:48.557147 master-2 kubenswrapper[4776]: I1011 10:47:48.557104 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d3c16a3929d8110ce5e8257b370727ad2dedf5c1aea80548a6befaede03b16c" Oct 11 10:47:48.557147 master-2 kubenswrapper[4776]: I1011 10:47:48.557125 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l" Oct 11 10:48:15.052696 master-2 kubenswrapper[4776]: I1011 10:48:15.052548 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw"] Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: E1011 10:48:15.052796 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f83a3b5-333b-4284-b03d-c03db77c3241" containerName="extract" Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.052810 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f83a3b5-333b-4284-b03d-c03db77c3241" containerName="extract" Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: E1011 10:48:15.052822 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" containerName="extract" Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.052828 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" containerName="extract" Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: E1011 10:48:15.052841 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f83a3b5-333b-4284-b03d-c03db77c3241" containerName="pull" Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.052847 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f83a3b5-333b-4284-b03d-c03db77c3241" containerName="pull" Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: E1011 10:48:15.052855 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb231c9e-66e8-4fdf-870d-a927418a72fa" containerName="util" Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.052860 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb231c9e-66e8-4fdf-870d-a927418a72fa" 
containerName="util" Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: E1011 10:48:15.052872 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8757af56-20fb-439e-adba-7e4e50378936" containerName="assisted-installer-controller" Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.052879 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="8757af56-20fb-439e-adba-7e4e50378936" containerName="assisted-installer-controller" Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: E1011 10:48:15.052888 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb231c9e-66e8-4fdf-870d-a927418a72fa" containerName="extract" Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.052893 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb231c9e-66e8-4fdf-870d-a927418a72fa" containerName="extract" Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: E1011 10:48:15.052903 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f83a3b5-333b-4284-b03d-c03db77c3241" containerName="util" Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.052910 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f83a3b5-333b-4284-b03d-c03db77c3241" containerName="util" Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: E1011 10:48:15.052916 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" containerName="util" Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.052922 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" containerName="util" Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: E1011 10:48:15.052930 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" containerName="pull" Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.052935 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" containerName="pull" Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: E1011 10:48:15.052949 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb231c9e-66e8-4fdf-870d-a927418a72fa" containerName="pull" Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.052955 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb231c9e-66e8-4fdf-870d-a927418a72fa" containerName="pull" Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.053047 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb231c9e-66e8-4fdf-870d-a927418a72fa" containerName="extract" Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.053058 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="27ad118d-adf9-4bbb-93ca-a7ca0e52a1bf" containerName="extract" Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.053069 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f83a3b5-333b-4284-b03d-c03db77c3241" containerName="extract" Oct 11 10:48:15.053331 master-2 kubenswrapper[4776]: I1011 10:48:15.053076 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="8757af56-20fb-439e-adba-7e4e50378936" containerName="assisted-installer-controller" Oct 11 10:48:15.054087 master-2 kubenswrapper[4776]: I1011 10:48:15.053486 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw" Oct 11 10:48:15.057752 master-2 kubenswrapper[4776]: I1011 10:48:15.057691 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Oct 11 10:48:15.070704 master-2 kubenswrapper[4776]: I1011 10:48:15.070648 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw"] Oct 11 10:48:15.109406 master-2 kubenswrapper[4776]: I1011 10:48:15.109321 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0d366bc3-091d-4bff-a8fd-c70fb91c1db6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw\" (UID: \"0d366bc3-091d-4bff-a8fd-c70fb91c1db6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw" Oct 11 10:48:15.109625 master-2 kubenswrapper[4776]: I1011 10:48:15.109466 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d366bc3-091d-4bff-a8fd-c70fb91c1db6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw\" (UID: \"0d366bc3-091d-4bff-a8fd-c70fb91c1db6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw" Oct 11 10:48:15.211464 master-2 kubenswrapper[4776]: I1011 10:48:15.211373 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d366bc3-091d-4bff-a8fd-c70fb91c1db6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw\" (UID: \"0d366bc3-091d-4bff-a8fd-c70fb91c1db6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw" Oct 11 10:48:15.211464 master-2 kubenswrapper[4776]: I1011 10:48:15.211473 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0d366bc3-091d-4bff-a8fd-c70fb91c1db6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw\" (UID: \"0d366bc3-091d-4bff-a8fd-c70fb91c1db6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw" Oct 11 10:48:15.214981 master-2 kubenswrapper[4776]: I1011 10:48:15.214945 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d366bc3-091d-4bff-a8fd-c70fb91c1db6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw\" (UID: \"0d366bc3-091d-4bff-a8fd-c70fb91c1db6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw" Oct 11 10:48:15.216379 master-2 kubenswrapper[4776]: I1011 10:48:15.216343 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0d366bc3-091d-4bff-a8fd-c70fb91c1db6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw\" (UID: \"0d366bc3-091d-4bff-a8fd-c70fb91c1db6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw" Oct 11 10:48:15.370286 master-2 kubenswrapper[4776]: I1011 10:48:15.370132 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw" Oct 11 10:48:15.801907 master-2 kubenswrapper[4776]: I1011 10:48:15.801773 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw"] Oct 11 10:48:15.812985 master-2 kubenswrapper[4776]: W1011 10:48:15.812920 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d366bc3_091d_4bff_a8fd_c70fb91c1db6.slice/crio-db4e0a20ccca7e896bb24012c4ab57597dc758f484ddd74a98edd9b073ff182e WatchSource:0}: Error finding container db4e0a20ccca7e896bb24012c4ab57597dc758f484ddd74a98edd9b073ff182e: Status 404 returned error can't find the container with id db4e0a20ccca7e896bb24012c4ab57597dc758f484ddd74a98edd9b073ff182e Oct 11 10:48:16.744078 master-2 kubenswrapper[4776]: I1011 10:48:16.743969 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw" event={"ID":"0d366bc3-091d-4bff-a8fd-c70fb91c1db6","Type":"ContainerStarted","Data":"db4e0a20ccca7e896bb24012c4ab57597dc758f484ddd74a98edd9b073ff182e"} Oct 11 10:48:22.786365 master-2 kubenswrapper[4776]: I1011 10:48:22.786297 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw" event={"ID":"0d366bc3-091d-4bff-a8fd-c70fb91c1db6","Type":"ContainerStarted","Data":"9e661aa200da88e024b9003c269a859af807771db7a6539e340775bd3699fe74"} Oct 11 10:48:22.842520 master-2 kubenswrapper[4776]: I1011 10:48:22.842450 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw" podStartSLOduration=1.686950357 podStartE2EDuration="7.842435498s" podCreationTimestamp="2025-10-11 10:48:15 +0000 UTC" firstStartedPulling="2025-10-11 10:48:15.816079803 +0000 UTC m=+1330.600506512" lastFinishedPulling="2025-10-11 10:48:21.971564944 +0000 UTC m=+1336.755991653" observedRunningTime="2025-10-11 10:48:22.841417041 +0000 UTC m=+1337.625843760" watchObservedRunningTime="2025-10-11 10:48:22.842435498 +0000 UTC m=+1337.626862207" Oct 11 10:48:53.981698 master-2 kubenswrapper[4776]: I1011 10:48:53.981620 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-hwrzt"] Oct 11 10:48:53.984040 master-2 kubenswrapper[4776]: I1011 10:48:53.984000 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:53.988785 master-2 kubenswrapper[4776]: I1011 10:48:53.988649 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Oct 11 10:48:53.988785 master-2 kubenswrapper[4776]: I1011 10:48:53.988694 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Oct 11 10:48:53.989093 master-2 kubenswrapper[4776]: I1011 10:48:53.988982 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Oct 11 10:48:53.989283 master-2 kubenswrapper[4776]: I1011 10:48:53.989192 4776 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Oct 11 10:48:54.069536 master-2 kubenswrapper[4776]: I1011 10:48:54.069504 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-frr-conf\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.069795 master-2 kubenswrapper[4776]: I1011 10:48:54.069778 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-reloader\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.069898 master-2 kubenswrapper[4776]: I1011 10:48:54.069884 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4cd9\" (UniqueName: \"kubernetes.io/projected/a7969839-a9c5-4a06-8472-84032bfb16f1-kube-api-access-p4cd9\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.070000 master-2 kubenswrapper[4776]: I1011 10:48:54.069985 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-frr-sockets\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.070091 master-2 kubenswrapper[4776]: I1011 10:48:54.070079 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a7969839-a9c5-4a06-8472-84032bfb16f1-frr-startup\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.070190 master-2 kubenswrapper[4776]: I1011 10:48:54.070177 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-metrics\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.070273 master-2 kubenswrapper[4776]: I1011 10:48:54.070260 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7969839-a9c5-4a06-8472-84032bfb16f1-metrics-certs\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.171927 master-2 
kubenswrapper[4776]: I1011 10:48:54.171858 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7969839-a9c5-4a06-8472-84032bfb16f1-metrics-certs\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.172138 master-2 kubenswrapper[4776]: I1011 10:48:54.172105 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-frr-conf\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.172191 master-2 kubenswrapper[4776]: I1011 10:48:54.172169 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-reloader\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.172240 master-2 kubenswrapper[4776]: I1011 10:48:54.172205 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4cd9\" (UniqueName: \"kubernetes.io/projected/a7969839-a9c5-4a06-8472-84032bfb16f1-kube-api-access-p4cd9\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.172290 master-2 kubenswrapper[4776]: I1011 10:48:54.172241 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-frr-sockets\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.172341 master-2 kubenswrapper[4776]: I1011 10:48:54.172287 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a7969839-a9c5-4a06-8472-84032bfb16f1-frr-startup\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.172341 master-2 kubenswrapper[4776]: I1011 10:48:54.172318 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-metrics\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.172844 master-2 kubenswrapper[4776]: I1011 10:48:54.172787 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-frr-conf\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.172957 master-2 kubenswrapper[4776]: I1011 10:48:54.172920 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-frr-sockets\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.173146 master-2 kubenswrapper[4776]: I1011 10:48:54.173099 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-metrics\") pod \"frr-k8s-hwrzt\" (UID: 
\"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.173146 master-2 kubenswrapper[4776]: I1011 10:48:54.173128 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a7969839-a9c5-4a06-8472-84032bfb16f1-reloader\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.173631 master-2 kubenswrapper[4776]: I1011 10:48:54.173586 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a7969839-a9c5-4a06-8472-84032bfb16f1-frr-startup\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.175517 master-2 kubenswrapper[4776]: I1011 10:48:54.175483 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7969839-a9c5-4a06-8472-84032bfb16f1-metrics-certs\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.270516 master-2 kubenswrapper[4776]: I1011 10:48:54.270374 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4cd9\" (UniqueName: \"kubernetes.io/projected/a7969839-a9c5-4a06-8472-84032bfb16f1-kube-api-access-p4cd9\") pod \"frr-k8s-hwrzt\" (UID: \"a7969839-a9c5-4a06-8472-84032bfb16f1\") " pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.301545 master-2 kubenswrapper[4776]: I1011 10:48:54.301489 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:48:54.911357 master-2 kubenswrapper[4776]: I1011 10:48:54.911314 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-g9nhb"] Oct 11 10:48:54.912268 master-2 kubenswrapper[4776]: I1011 10:48:54.912242 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-g9nhb" Oct 11 10:48:54.915266 master-2 kubenswrapper[4776]: I1011 10:48:54.915223 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Oct 11 10:48:54.915621 master-2 kubenswrapper[4776]: I1011 10:48:54.915599 4776 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Oct 11 10:48:54.915760 master-2 kubenswrapper[4776]: I1011 10:48:54.915634 4776 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Oct 11 10:48:54.982541 master-2 kubenswrapper[4776]: I1011 10:48:54.982457 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmj2l\" (UniqueName: \"kubernetes.io/projected/018da26f-14c3-468f-bab0-089a91b3ef26-kube-api-access-tmj2l\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb" Oct 11 10:48:54.982541 master-2 kubenswrapper[4776]: I1011 10:48:54.982544 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-metrics-certs\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb" Oct 11 10:48:54.983280 master-2 kubenswrapper[4776]: I1011 10:48:54.982585 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/018da26f-14c3-468f-bab0-089a91b3ef26-metallb-excludel2\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb" Oct 11 10:48:54.983280 master-2 kubenswrapper[4776]: I1011 10:48:54.982639 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-memberlist\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb" Oct 11 10:48:55.006201 master-2 kubenswrapper[4776]: I1011 10:48:55.006128 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwrzt" event={"ID":"a7969839-a9c5-4a06-8472-84032bfb16f1","Type":"ContainerStarted","Data":"da562bc3ad6c4f3edf56c5ce222e8644e275ada51cc84d1ac3562af94f3c9d9e"} Oct 11 10:48:55.084240 master-2 kubenswrapper[4776]: I1011 10:48:55.084156 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmj2l\" (UniqueName: \"kubernetes.io/projected/018da26f-14c3-468f-bab0-089a91b3ef26-kube-api-access-tmj2l\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb" Oct 11 10:48:55.084240 master-2 kubenswrapper[4776]: I1011 10:48:55.084217 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-metrics-certs\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb" Oct 11 10:48:55.084574 master-2 kubenswrapper[4776]: I1011 10:48:55.084267 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/018da26f-14c3-468f-bab0-089a91b3ef26-metallb-excludel2\") pod 
\"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb" Oct 11 10:48:55.084574 master-2 kubenswrapper[4776]: I1011 10:48:55.084317 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-memberlist\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb" Oct 11 10:48:55.084574 master-2 kubenswrapper[4776]: E1011 10:48:55.084478 4776 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 11 10:48:55.084574 master-2 kubenswrapper[4776]: E1011 10:48:55.084547 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-memberlist podName:018da26f-14c3-468f-bab0-089a91b3ef26 nodeName:}" failed. No retries permitted until 2025-10-11 10:48:55.584525981 +0000 UTC m=+1370.368952690 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-memberlist") pod "speaker-g9nhb" (UID: "018da26f-14c3-468f-bab0-089a91b3ef26") : secret "metallb-memberlist" not found Oct 11 10:48:55.085207 master-2 kubenswrapper[4776]: I1011 10:48:55.085156 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/018da26f-14c3-468f-bab0-089a91b3ef26-metallb-excludel2\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb" Oct 11 10:48:55.086966 master-2 kubenswrapper[4776]: I1011 10:48:55.086916 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-metrics-certs\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb" Oct 11 10:48:55.112399 master-2 kubenswrapper[4776]: I1011 10:48:55.112331 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmj2l\" (UniqueName: \"kubernetes.io/projected/018da26f-14c3-468f-bab0-089a91b3ef26-kube-api-access-tmj2l\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb" Oct 11 10:48:55.591627 master-2 kubenswrapper[4776]: I1011 10:48:55.591360 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-memberlist\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb" Oct 11 10:48:55.591627 master-2 kubenswrapper[4776]: E1011 10:48:55.591534 4776 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 11 10:48:55.592753 master-2 kubenswrapper[4776]: E1011 10:48:55.592732 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-memberlist podName:018da26f-14c3-468f-bab0-089a91b3ef26 nodeName:}" failed. No retries permitted until 2025-10-11 10:48:56.59159782 +0000 UTC m=+1371.376024529 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-memberlist") pod "speaker-g9nhb" (UID: "018da26f-14c3-468f-bab0-089a91b3ef26") : secret "metallb-memberlist" not found Oct 11 10:48:56.613405 master-2 kubenswrapper[4776]: I1011 10:48:56.613346 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-memberlist\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb" Oct 11 10:48:56.617888 master-2 kubenswrapper[4776]: I1011 10:48:56.617607 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/018da26f-14c3-468f-bab0-089a91b3ef26-memberlist\") pod \"speaker-g9nhb\" (UID: \"018da26f-14c3-468f-bab0-089a91b3ef26\") " pod="metallb-system/speaker-g9nhb" Oct 11 10:48:56.725444 master-2 kubenswrapper[4776]: I1011 10:48:56.725382 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-g9nhb" Oct 11 10:48:56.756460 master-2 kubenswrapper[4776]: W1011 10:48:56.756323 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod018da26f_14c3_468f_bab0_089a91b3ef26.slice/crio-2e053274556684ab5eb85e6ef62333df56217058d4d5e6826b9fd6d037a67cbc WatchSource:0}: Error finding container 2e053274556684ab5eb85e6ef62333df56217058d4d5e6826b9fd6d037a67cbc: Status 404 returned error can't find the container with id 2e053274556684ab5eb85e6ef62333df56217058d4d5e6826b9fd6d037a67cbc Oct 11 10:48:56.896659 master-2 kubenswrapper[4776]: I1011 10:48:56.895440 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-cwqqw"] Oct 11 10:48:56.896659 master-2 kubenswrapper[4776]: I1011 10:48:56.896390 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:56.910699 master-2 kubenswrapper[4776]: I1011 10:48:56.910633 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Oct 11 10:48:56.911298 master-2 kubenswrapper[4776]: I1011 10:48:56.911222 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Oct 11 10:48:57.018569 master-2 kubenswrapper[4776]: I1011 10:48:57.018509 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpx5g\" (UniqueName: \"kubernetes.io/projected/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-kube-api-access-qpx5g\") pod \"nmstate-handler-cwqqw\" (UID: \"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.018569 master-2 kubenswrapper[4776]: I1011 10:48:57.018566 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-nmstate-lock\") pod \"nmstate-handler-cwqqw\" (UID: \"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.018850 master-2 kubenswrapper[4776]: I1011 10:48:57.018598 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-ovs-socket\") pod \"nmstate-handler-cwqqw\" (UID: \"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.018850 master-2 kubenswrapper[4776]: I1011 10:48:57.018638 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-dbus-socket\") pod \"nmstate-handler-cwqqw\" (UID: \"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.022735 master-2 kubenswrapper[4776]: I1011 10:48:57.022665 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g9nhb" event={"ID":"018da26f-14c3-468f-bab0-089a91b3ef26","Type":"ContainerStarted","Data":"2e053274556684ab5eb85e6ef62333df56217058d4d5e6826b9fd6d037a67cbc"} Oct 11 10:48:57.067709 master-2 kubenswrapper[4776]: I1011 10:48:57.067621 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd"] Oct 11 10:48:57.069633 master-2 kubenswrapper[4776]: I1011 10:48:57.069616 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" Oct 11 10:48:57.077491 master-2 kubenswrapper[4776]: I1011 10:48:57.077443 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Oct 11 10:48:57.077651 master-2 kubenswrapper[4776]: I1011 10:48:57.077451 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Oct 11 10:48:57.088208 master-2 kubenswrapper[4776]: I1011 10:48:57.088169 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd"] Oct 11 10:48:57.120501 master-2 kubenswrapper[4776]: I1011 10:48:57.120427 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-ovs-socket\") pod \"nmstate-handler-cwqqw\" (UID: \"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.120712 master-2 kubenswrapper[4776]: I1011 10:48:57.120530 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-ovs-socket\") pod \"nmstate-handler-cwqqw\" (UID: \"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.120712 master-2 kubenswrapper[4776]: I1011 10:48:57.120533 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-dbus-socket\") pod \"nmstate-handler-cwqqw\" (UID: \"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.120712 master-2 kubenswrapper[4776]: I1011 10:48:57.120634 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2n4f\" (UniqueName: \"kubernetes.io/projected/2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2-kube-api-access-c2n4f\") pod \"nmstate-console-plugin-6b874cbd85-p97jd\" (UID: \"2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" Oct 11 10:48:57.120712 master-2 kubenswrapper[4776]: I1011 10:48:57.120691 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpx5g\" (UniqueName: \"kubernetes.io/projected/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-kube-api-access-qpx5g\") pod \"nmstate-handler-cwqqw\" (UID: \"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.120843 master-2 kubenswrapper[4776]: I1011 10:48:57.120721 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-p97jd\" (UID: \"2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" Oct 11 10:48:57.120843 master-2 kubenswrapper[4776]: I1011 10:48:57.120755 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-p97jd\" (UID: \"2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2\") " 
pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" Oct 11 10:48:57.120903 master-2 kubenswrapper[4776]: I1011 10:48:57.120840 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-nmstate-lock\") pod \"nmstate-handler-cwqqw\" (UID: \"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.120936 master-2 kubenswrapper[4776]: I1011 10:48:57.120757 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-dbus-socket\") pod \"nmstate-handler-cwqqw\" (UID: \"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.120936 master-2 kubenswrapper[4776]: I1011 10:48:57.120913 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-nmstate-lock\") pod \"nmstate-handler-cwqqw\" (UID: \"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.155140 master-2 kubenswrapper[4776]: I1011 10:48:57.155009 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpx5g\" (UniqueName: \"kubernetes.io/projected/2fd87adc-6c4a-46cb-9fcc-cd35a48b1614-kube-api-access-qpx5g\") pod \"nmstate-handler-cwqqw\" (UID: \"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614\") " pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.217868 master-2 kubenswrapper[4776]: I1011 10:48:57.217589 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:48:57.221919 master-2 kubenswrapper[4776]: I1011 10:48:57.221885 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2n4f\" (UniqueName: \"kubernetes.io/projected/2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2-kube-api-access-c2n4f\") pod \"nmstate-console-plugin-6b874cbd85-p97jd\" (UID: \"2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" Oct 11 10:48:57.221997 master-2 kubenswrapper[4776]: I1011 10:48:57.221946 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-p97jd\" (UID: \"2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" Oct 11 10:48:57.221997 master-2 kubenswrapper[4776]: I1011 10:48:57.221977 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-p97jd\" (UID: \"2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" Oct 11 10:48:57.223391 master-2 kubenswrapper[4776]: I1011 10:48:57.223335 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-p97jd\" (UID: \"2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" Oct 11 
10:48:57.225426 master-2 kubenswrapper[4776]: I1011 10:48:57.225380 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-p97jd\" (UID: \"2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" Oct 11 10:48:57.245240 master-2 kubenswrapper[4776]: I1011 10:48:57.245003 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2n4f\" (UniqueName: \"kubernetes.io/projected/2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2-kube-api-access-c2n4f\") pod \"nmstate-console-plugin-6b874cbd85-p97jd\" (UID: \"2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" Oct 11 10:48:57.309485 master-2 kubenswrapper[4776]: I1011 10:48:57.309403 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6f9d445f57-z6k82"] Oct 11 10:48:57.396121 master-2 kubenswrapper[4776]: I1011 10:48:57.396071 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" Oct 11 10:48:58.029194 master-2 kubenswrapper[4776]: I1011 10:48:58.029143 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-cwqqw" event={"ID":"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614","Type":"ContainerStarted","Data":"1cf526033748b2471ac92bdd534612627a5565219bc7675de1e9849ae155faf3"} Oct 11 10:49:00.042736 master-2 kubenswrapper[4776]: I1011 10:49:00.042660 4776 generic.go:334] "Generic (PLEG): container finished" podID="a7969839-a9c5-4a06-8472-84032bfb16f1" containerID="2d85131d0b78968e260fb4a4f6f260fb925669f144ba66cb84a6e4b5e3785fd7" exitCode=0 Oct 11 10:49:00.042736 master-2 kubenswrapper[4776]: I1011 10:49:00.042734 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwrzt" event={"ID":"a7969839-a9c5-4a06-8472-84032bfb16f1","Type":"ContainerDied","Data":"2d85131d0b78968e260fb4a4f6f260fb925669f144ba66cb84a6e4b5e3785fd7"} Oct 11 10:49:00.137367 master-2 kubenswrapper[4776]: I1011 10:49:00.137309 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd"] Oct 11 10:49:01.053341 master-2 kubenswrapper[4776]: I1011 10:49:01.053298 4776 generic.go:334] "Generic (PLEG): container finished" podID="a7969839-a9c5-4a06-8472-84032bfb16f1" containerID="810fccd45766370baebceb462bd6ca21c050ab404c82e5965f717a60ba4426b6" exitCode=0 Oct 11 10:49:01.053859 master-2 kubenswrapper[4776]: I1011 10:49:01.053361 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwrzt" event={"ID":"a7969839-a9c5-4a06-8472-84032bfb16f1","Type":"ContainerDied","Data":"810fccd45766370baebceb462bd6ca21c050ab404c82e5965f717a60ba4426b6"} Oct 11 10:49:01.057530 master-2 kubenswrapper[4776]: I1011 10:49:01.057461 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" event={"ID":"2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2","Type":"ContainerStarted","Data":"2a73c2e72ee1fbbc2275604c3cf3ac9a8c5b90432c8781d1788942420cf09440"} Oct 11 10:49:02.066118 master-2 kubenswrapper[4776]: I1011 10:49:02.066039 4776 generic.go:334] "Generic (PLEG): container finished" podID="a7969839-a9c5-4a06-8472-84032bfb16f1" containerID="6e1a2e8eb0cb9ca2b15ad391e2536486bad31c4622a2380f6d76ab2751f0da07" 
exitCode=0 Oct 11 10:49:02.066118 master-2 kubenswrapper[4776]: I1011 10:49:02.066106 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwrzt" event={"ID":"a7969839-a9c5-4a06-8472-84032bfb16f1","Type":"ContainerDied","Data":"6e1a2e8eb0cb9ca2b15ad391e2536486bad31c4622a2380f6d76ab2751f0da07"} Oct 11 10:49:04.080011 master-2 kubenswrapper[4776]: I1011 10:49:04.079946 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-cwqqw" event={"ID":"2fd87adc-6c4a-46cb-9fcc-cd35a48b1614","Type":"ContainerStarted","Data":"db48ef04b71e4abfea32101496ffb211fb94e0e5b9db79472ce75a84db722669"} Oct 11 10:49:04.080782 master-2 kubenswrapper[4776]: I1011 10:49:04.080199 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:49:04.081912 master-2 kubenswrapper[4776]: I1011 10:49:04.081872 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g9nhb" event={"ID":"018da26f-14c3-468f-bab0-089a91b3ef26","Type":"ContainerStarted","Data":"0c25f65d39c11a10d2ab94e1a284849d92d987a4ba68949b08055afb062c5a71"} Oct 11 10:49:04.085435 master-2 kubenswrapper[4776]: I1011 10:49:04.085407 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwrzt" event={"ID":"a7969839-a9c5-4a06-8472-84032bfb16f1","Type":"ContainerStarted","Data":"660146d92780a8daa1e7cc0ec9356a09e60f843c3b31e42d0bcac924fccacf2f"} Oct 11 10:49:04.085435 master-2 kubenswrapper[4776]: I1011 10:49:04.085431 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwrzt" event={"ID":"a7969839-a9c5-4a06-8472-84032bfb16f1","Type":"ContainerStarted","Data":"f4cf841cf8a891127a1eb84034cb7e432bebdb81a9cdca29523823c8fcf23a03"} Oct 11 10:49:04.085567 master-2 kubenswrapper[4776]: I1011 10:49:04.085442 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwrzt" event={"ID":"a7969839-a9c5-4a06-8472-84032bfb16f1","Type":"ContainerStarted","Data":"fdfb7b76d3283faf73e957e99c95a0486b90e03019656b0c33da0c5f92f4244a"} Oct 11 10:49:04.085567 master-2 kubenswrapper[4776]: I1011 10:49:04.085454 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwrzt" event={"ID":"a7969839-a9c5-4a06-8472-84032bfb16f1","Type":"ContainerStarted","Data":"15d9491ebe5a9a0c85a026d09f48cc74e3f7d501690126d8e7257c9710a12e3b"} Oct 11 10:49:04.087019 master-2 kubenswrapper[4776]: I1011 10:49:04.086951 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" event={"ID":"2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2","Type":"ContainerStarted","Data":"6365afc3d8d83f54bd90ac532aee4f68c2513147783f4f6b8cc14f5f69b9a2aa"} Oct 11 10:49:04.108012 master-2 kubenswrapper[4776]: I1011 10:49:04.107881 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-cwqqw" podStartSLOduration=1.8506049180000002 podStartE2EDuration="8.107861327s" podCreationTimestamp="2025-10-11 10:48:56 +0000 UTC" firstStartedPulling="2025-10-11 10:48:57.263827849 +0000 UTC m=+1372.048254558" lastFinishedPulling="2025-10-11 10:49:03.521084258 +0000 UTC m=+1378.305510967" observedRunningTime="2025-10-11 10:49:04.105135774 +0000 UTC m=+1378.889562493" watchObservedRunningTime="2025-10-11 10:49:04.107861327 +0000 UTC m=+1378.892288036" Oct 11 10:49:04.144751 master-2 kubenswrapper[4776]: I1011 10:49:04.134159 4776 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd" podStartSLOduration=3.738643448 podStartE2EDuration="7.134139239s" podCreationTimestamp="2025-10-11 10:48:57 +0000 UTC" firstStartedPulling="2025-10-11 10:49:00.128194118 +0000 UTC m=+1374.912620827" lastFinishedPulling="2025-10-11 10:49:03.523689909 +0000 UTC m=+1378.308116618" observedRunningTime="2025-10-11 10:49:04.130367027 +0000 UTC m=+1378.914793746" watchObservedRunningTime="2025-10-11 10:49:04.134139239 +0000 UTC m=+1378.918565958" Oct 11 10:49:05.096175 master-2 kubenswrapper[4776]: I1011 10:49:05.096114 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g9nhb" event={"ID":"018da26f-14c3-468f-bab0-089a91b3ef26","Type":"ContainerStarted","Data":"6487986e2b8d76acb82a49f89b5d910e5b5cf96fda73c1595fbde2cc0649dd49"} Oct 11 10:49:05.217210 master-2 kubenswrapper[4776]: I1011 10:49:05.217127 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-g9nhb" podStartSLOduration=3.33585625 podStartE2EDuration="11.217108382s" podCreationTimestamp="2025-10-11 10:48:54 +0000 UTC" firstStartedPulling="2025-10-11 10:48:56.758727083 +0000 UTC m=+1371.543153792" lastFinishedPulling="2025-10-11 10:49:04.639979215 +0000 UTC m=+1379.424405924" observedRunningTime="2025-10-11 10:49:05.21443465 +0000 UTC m=+1379.998861409" watchObservedRunningTime="2025-10-11 10:49:05.217108382 +0000 UTC m=+1380.001535091" Oct 11 10:49:06.106438 master-2 kubenswrapper[4776]: I1011 10:49:06.106384 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwrzt" event={"ID":"a7969839-a9c5-4a06-8472-84032bfb16f1","Type":"ContainerStarted","Data":"abe839578a5067a2c1195dd85fb5d672d1e4d38a472f5351b4b8f42fcad85ae8"} Oct 11 10:49:06.106438 master-2 kubenswrapper[4776]: I1011 10:49:06.106436 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwrzt" event={"ID":"a7969839-a9c5-4a06-8472-84032bfb16f1","Type":"ContainerStarted","Data":"57b0bf7b7f3b477b71ff3074a899bba6b58a425b15b987e5bcb3953f1065229d"} Oct 11 10:49:06.107152 master-2 kubenswrapper[4776]: I1011 10:49:06.106534 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-g9nhb" Oct 11 10:49:06.146934 master-2 kubenswrapper[4776]: I1011 10:49:06.146840 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-hwrzt" podStartSLOduration=2.701471696 podStartE2EDuration="13.146821972s" podCreationTimestamp="2025-10-11 10:48:53 +0000 UTC" firstStartedPulling="2025-10-11 10:48:54.45303563 +0000 UTC m=+1369.237462339" lastFinishedPulling="2025-10-11 10:49:04.898385906 +0000 UTC m=+1379.682812615" observedRunningTime="2025-10-11 10:49:06.142498085 +0000 UTC m=+1380.926924794" watchObservedRunningTime="2025-10-11 10:49:06.146821972 +0000 UTC m=+1380.931248691" Oct 11 10:49:07.113957 master-2 kubenswrapper[4776]: I1011 10:49:07.113901 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:49:09.302618 master-2 kubenswrapper[4776]: I1011 10:49:09.302550 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:49:09.339716 master-2 kubenswrapper[4776]: I1011 10:49:09.339406 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:49:09.504303 master-2 kubenswrapper[4776]: I1011 10:49:09.504173 4776 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-5-master-2"] Oct 11 10:49:09.513721 master-2 kubenswrapper[4776]: I1011 10:49:09.513636 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/installer-5-master-2"] Oct 11 10:49:10.068118 master-2 kubenswrapper[4776]: I1011 10:49:10.068064 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebeec22d-9309-4efd-bbc0-f44c750a258c" path="/var/lib/kubelet/pods/ebeec22d-9309-4efd-bbc0-f44c750a258c/volumes" Oct 11 10:49:12.258153 master-2 kubenswrapper[4776]: I1011 10:49:12.258042 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-cwqqw" Oct 11 10:49:14.311368 master-2 kubenswrapper[4776]: I1011 10:49:14.308802 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-hwrzt" Oct 11 10:49:15.579762 master-2 kubenswrapper[4776]: I1011 10:49:15.579713 4776 scope.go:117] "RemoveContainer" containerID="25ad594b9284fd2089fd6abfaa970257ef0a465b5e9177e3d753cb32feaf3eb1" Oct 11 10:49:16.730308 master-2 kubenswrapper[4776]: I1011 10:49:16.730198 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-g9nhb" Oct 11 10:49:22.347286 master-2 kubenswrapper[4776]: I1011 10:49:22.347219 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6f9d445f57-z6k82" podUID="9ac259b6-cf42-49b4-b1b7-76cc9072d059" containerName="console" containerID="cri-o://64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c" gracePeriod=15 Oct 11 10:49:22.791841 master-2 kubenswrapper[4776]: I1011 10:49:22.791162 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6f9d445f57-z6k82_9ac259b6-cf42-49b4-b1b7-76cc9072d059/console/0.log" Oct 11 10:49:22.791841 master-2 kubenswrapper[4776]: I1011 10:49:22.791266 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:49:22.823460 master-2 kubenswrapper[4776]: I1011 10:49:22.823390 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-config\") pod \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " Oct 11 10:49:22.824041 master-2 kubenswrapper[4776]: I1011 10:49:22.824004 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-serving-cert\") pod \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " Oct 11 10:49:22.824116 master-2 kubenswrapper[4776]: I1011 10:49:22.824075 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-oauth-config\") pod \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " Oct 11 10:49:22.824116 master-2 kubenswrapper[4776]: I1011 10:49:22.824098 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-trusted-ca-bundle\") pod \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " Oct 11 10:49:22.824116 master-2 kubenswrapper[4776]: I1011 10:49:22.824098 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-config" (OuterVolumeSpecName: "console-config") pod "9ac259b6-cf42-49b4-b1b7-76cc9072d059" (UID: "9ac259b6-cf42-49b4-b1b7-76cc9072d059"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:49:22.824267 master-2 kubenswrapper[4776]: I1011 10:49:22.824154 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzqm8\" (UniqueName: \"kubernetes.io/projected/9ac259b6-cf42-49b4-b1b7-76cc9072d059-kube-api-access-tzqm8\") pod \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " Oct 11 10:49:22.824267 master-2 kubenswrapper[4776]: I1011 10:49:22.824196 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-oauth-serving-cert\") pod \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " Oct 11 10:49:22.824267 master-2 kubenswrapper[4776]: I1011 10:49:22.824255 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-service-ca\") pod \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\" (UID: \"9ac259b6-cf42-49b4-b1b7-76cc9072d059\") " Oct 11 10:49:22.824558 master-2 kubenswrapper[4776]: I1011 10:49:22.824526 4776 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:49:22.825210 master-2 kubenswrapper[4776]: I1011 10:49:22.825175 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9ac259b6-cf42-49b4-b1b7-76cc9072d059" (UID: "9ac259b6-cf42-49b4-b1b7-76cc9072d059"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:49:22.825336 master-2 kubenswrapper[4776]: I1011 10:49:22.825285 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-service-ca" (OuterVolumeSpecName: "service-ca") pod "9ac259b6-cf42-49b4-b1b7-76cc9072d059" (UID: "9ac259b6-cf42-49b4-b1b7-76cc9072d059"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:49:22.826575 master-2 kubenswrapper[4776]: I1011 10:49:22.826408 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "9ac259b6-cf42-49b4-b1b7-76cc9072d059" (UID: "9ac259b6-cf42-49b4-b1b7-76cc9072d059"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:49:22.831283 master-2 kubenswrapper[4776]: I1011 10:49:22.830789 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "9ac259b6-cf42-49b4-b1b7-76cc9072d059" (UID: "9ac259b6-cf42-49b4-b1b7-76cc9072d059"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:49:22.831283 master-2 kubenswrapper[4776]: I1011 10:49:22.831072 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ac259b6-cf42-49b4-b1b7-76cc9072d059-kube-api-access-tzqm8" (OuterVolumeSpecName: "kube-api-access-tzqm8") pod "9ac259b6-cf42-49b4-b1b7-76cc9072d059" (UID: "9ac259b6-cf42-49b4-b1b7-76cc9072d059"). InnerVolumeSpecName "kube-api-access-tzqm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:49:22.833546 master-2 kubenswrapper[4776]: I1011 10:49:22.833528 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "9ac259b6-cf42-49b4-b1b7-76cc9072d059" (UID: "9ac259b6-cf42-49b4-b1b7-76cc9072d059"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:49:22.925651 master-2 kubenswrapper[4776]: I1011 10:49:22.925608 4776 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-oauth-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:49:22.925651 master-2 kubenswrapper[4776]: I1011 10:49:22.925645 4776 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-trusted-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:49:22.925651 master-2 kubenswrapper[4776]: I1011 10:49:22.925655 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzqm8\" (UniqueName: \"kubernetes.io/projected/9ac259b6-cf42-49b4-b1b7-76cc9072d059-kube-api-access-tzqm8\") on node \"master-2\" DevicePath \"\"" Oct 11 10:49:22.925651 master-2 kubenswrapper[4776]: I1011 10:49:22.925665 4776 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-oauth-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:49:22.925968 master-2 kubenswrapper[4776]: I1011 10:49:22.925713 4776 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9ac259b6-cf42-49b4-b1b7-76cc9072d059-service-ca\") on node \"master-2\" DevicePath \"\"" Oct 11 10:49:22.925968 master-2 kubenswrapper[4776]: I1011 10:49:22.925725 4776 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ac259b6-cf42-49b4-b1b7-76cc9072d059-console-serving-cert\") on node \"master-2\" DevicePath \"\"" Oct 11 10:49:23.252269 master-2 kubenswrapper[4776]: I1011 10:49:23.252162 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6f9d445f57-z6k82_9ac259b6-cf42-49b4-b1b7-76cc9072d059/console/0.log" Oct 11 10:49:23.252661 master-2 kubenswrapper[4776]: I1011 10:49:23.252293 4776 generic.go:334] "Generic (PLEG): container finished" podID="9ac259b6-cf42-49b4-b1b7-76cc9072d059" containerID="64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c" exitCode=2 Oct 11 10:49:23.252661 master-2 kubenswrapper[4776]: I1011 10:49:23.252385 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6f9d445f57-z6k82" Oct 11 10:49:23.255115 master-2 kubenswrapper[4776]: I1011 10:49:23.252367 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f9d445f57-z6k82" event={"ID":"9ac259b6-cf42-49b4-b1b7-76cc9072d059","Type":"ContainerDied","Data":"64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c"} Oct 11 10:49:23.255115 master-2 kubenswrapper[4776]: I1011 10:49:23.252837 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f9d445f57-z6k82" event={"ID":"9ac259b6-cf42-49b4-b1b7-76cc9072d059","Type":"ContainerDied","Data":"88f0b704732f3071e12ed605ea3d2e3766c911ff93df7e212bf6bc543d745d21"} Oct 11 10:49:23.255115 master-2 kubenswrapper[4776]: I1011 10:49:23.252892 4776 scope.go:117] "RemoveContainer" containerID="64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c" Oct 11 10:49:23.289065 master-2 kubenswrapper[4776]: I1011 10:49:23.288616 4776 scope.go:117] "RemoveContainer" containerID="64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c" Oct 11 10:49:23.289536 master-2 kubenswrapper[4776]: E1011 10:49:23.289429 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c\": container with ID starting with 64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c not found: ID does not exist" containerID="64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c" Oct 11 10:49:23.289727 master-2 kubenswrapper[4776]: I1011 10:49:23.289601 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c"} err="failed to get container status \"64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c\": rpc error: code = NotFound desc = could not find container \"64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c\": container with ID starting with 64a8a37f1f0e59661046f08e00be97caa85ccc10d7a3eb199567c04f84ade75c not found: ID does not exist" Oct 11 10:49:23.298382 master-2 kubenswrapper[4776]: I1011 10:49:23.298323 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6f9d445f57-z6k82"] Oct 11 10:49:23.302584 master-2 kubenswrapper[4776]: I1011 10:49:23.302523 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6f9d445f57-z6k82"] Oct 11 10:49:23.946995 master-2 kubenswrapper[4776]: I1011 10:49:23.946919 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-69f8677c95-z9d9d"] Oct 11 10:49:23.947659 master-2 kubenswrapper[4776]: E1011 10:49:23.947237 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac259b6-cf42-49b4-b1b7-76cc9072d059" containerName="console" Oct 11 10:49:23.947659 master-2 kubenswrapper[4776]: I1011 10:49:23.947254 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac259b6-cf42-49b4-b1b7-76cc9072d059" containerName="console" Oct 11 10:49:23.947659 master-2 kubenswrapper[4776]: I1011 10:49:23.947411 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ac259b6-cf42-49b4-b1b7-76cc9072d059" containerName="console" Oct 11 10:49:23.948027 master-2 kubenswrapper[4776]: I1011 10:49:23.948000 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:23.956363 master-2 kubenswrapper[4776]: I1011 10:49:23.956312 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 11 10:49:23.957403 master-2 kubenswrapper[4776]: I1011 10:49:23.957345 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-8wmjp" Oct 11 10:49:23.957806 master-2 kubenswrapper[4776]: I1011 10:49:23.957769 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 11 10:49:23.958360 master-2 kubenswrapper[4776]: I1011 10:49:23.958306 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 11 10:49:23.958605 master-2 kubenswrapper[4776]: I1011 10:49:23.958415 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 11 10:49:23.958926 master-2 kubenswrapper[4776]: I1011 10:49:23.958848 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 11 10:49:23.976765 master-2 kubenswrapper[4776]: I1011 10:49:23.973071 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Oct 11 10:49:23.976765 master-2 kubenswrapper[4776]: I1011 10:49:23.975621 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69f8677c95-z9d9d"] Oct 11 10:49:24.044244 master-2 kubenswrapper[4776]: I1011 10:49:24.044096 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-oauth-serving-cert\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.044495 master-2 kubenswrapper[4776]: I1011 10:49:24.044221 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm685\" (UniqueName: \"kubernetes.io/projected/722d06e2-c934-4ba0-82e4-51c4b2104851-kube-api-access-qm685\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.044854 master-2 kubenswrapper[4776]: I1011 10:49:24.044814 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-console-config\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.045106 master-2 kubenswrapper[4776]: I1011 10:49:24.045051 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/722d06e2-c934-4ba0-82e4-51c4b2104851-console-oauth-config\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.045187 master-2 kubenswrapper[4776]: I1011 10:49:24.045112 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-trusted-ca-bundle\") pod 
\"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.045245 master-2 kubenswrapper[4776]: I1011 10:49:24.045183 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/722d06e2-c934-4ba0-82e4-51c4b2104851-console-serving-cert\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.045427 master-2 kubenswrapper[4776]: I1011 10:49:24.045349 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-service-ca\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.070656 master-2 kubenswrapper[4776]: I1011 10:49:24.070592 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ac259b6-cf42-49b4-b1b7-76cc9072d059" path="/var/lib/kubelet/pods/9ac259b6-cf42-49b4-b1b7-76cc9072d059/volumes" Oct 11 10:49:24.147335 master-2 kubenswrapper[4776]: I1011 10:49:24.147247 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-console-config\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.147567 master-2 kubenswrapper[4776]: I1011 10:49:24.147429 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/722d06e2-c934-4ba0-82e4-51c4b2104851-console-oauth-config\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.147567 master-2 kubenswrapper[4776]: I1011 10:49:24.147482 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-trusted-ca-bundle\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.147567 master-2 kubenswrapper[4776]: I1011 10:49:24.147532 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/722d06e2-c934-4ba0-82e4-51c4b2104851-console-serving-cert\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.147691 master-2 kubenswrapper[4776]: I1011 10:49:24.147568 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-service-ca\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.147691 master-2 kubenswrapper[4776]: I1011 10:49:24.147619 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-oauth-serving-cert\") pod 
\"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.147691 master-2 kubenswrapper[4776]: I1011 10:49:24.147642 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm685\" (UniqueName: \"kubernetes.io/projected/722d06e2-c934-4ba0-82e4-51c4b2104851-kube-api-access-qm685\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.150360 master-2 kubenswrapper[4776]: I1011 10:49:24.150279 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-console-config\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.151954 master-2 kubenswrapper[4776]: I1011 10:49:24.151906 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-service-ca\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.152142 master-2 kubenswrapper[4776]: I1011 10:49:24.152106 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-trusted-ca-bundle\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.153180 master-2 kubenswrapper[4776]: I1011 10:49:24.152989 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/722d06e2-c934-4ba0-82e4-51c4b2104851-oauth-serving-cert\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.155597 master-2 kubenswrapper[4776]: I1011 10:49:24.155537 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/722d06e2-c934-4ba0-82e4-51c4b2104851-console-serving-cert\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.156484 master-2 kubenswrapper[4776]: I1011 10:49:24.156445 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/722d06e2-c934-4ba0-82e4-51c4b2104851-console-oauth-config\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.188217 master-2 kubenswrapper[4776]: I1011 10:49:24.188139 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm685\" (UniqueName: \"kubernetes.io/projected/722d06e2-c934-4ba0-82e4-51c4b2104851-kube-api-access-qm685\") pod \"console-69f8677c95-z9d9d\" (UID: \"722d06e2-c934-4ba0-82e4-51c4b2104851\") " pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.271345 master-2 kubenswrapper[4776]: I1011 10:49:24.271227 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:24.675882 master-2 kubenswrapper[4776]: I1011 10:49:24.675852 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69f8677c95-z9d9d"] Oct 11 10:49:24.682659 master-2 kubenswrapper[4776]: W1011 10:49:24.682632 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod722d06e2_c934_4ba0_82e4_51c4b2104851.slice/crio-cf945511894f0824106ac2fc18130f0fa56882302f89dd12285078f2b110ef3e WatchSource:0}: Error finding container cf945511894f0824106ac2fc18130f0fa56882302f89dd12285078f2b110ef3e: Status 404 returned error can't find the container with id cf945511894f0824106ac2fc18130f0fa56882302f89dd12285078f2b110ef3e Oct 11 10:49:25.276239 master-2 kubenswrapper[4776]: I1011 10:49:25.276176 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69f8677c95-z9d9d" event={"ID":"722d06e2-c934-4ba0-82e4-51c4b2104851","Type":"ContainerStarted","Data":"044fa5901fba56d1be7c30b3901846c5c8e6b627ff7cc261145d28f59a7e889c"} Oct 11 10:49:25.276239 master-2 kubenswrapper[4776]: I1011 10:49:25.276224 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69f8677c95-z9d9d" event={"ID":"722d06e2-c934-4ba0-82e4-51c4b2104851","Type":"ContainerStarted","Data":"cf945511894f0824106ac2fc18130f0fa56882302f89dd12285078f2b110ef3e"} Oct 11 10:49:25.314696 master-2 kubenswrapper[4776]: I1011 10:49:25.314592 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-69f8677c95-z9d9d" podStartSLOduration=28.314569281 podStartE2EDuration="28.314569281s" podCreationTimestamp="2025-10-11 10:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:49:25.311254032 +0000 UTC m=+1400.095680781" watchObservedRunningTime="2025-10-11 10:49:25.314569281 +0000 UTC m=+1400.098996000" Oct 11 10:49:26.297295 master-2 kubenswrapper[4776]: I1011 10:49:26.297233 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-kjcgl"] Oct 11 10:49:26.298172 master-2 kubenswrapper[4776]: I1011 10:49:26.298138 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.301519 master-2 kubenswrapper[4776]: I1011 10:49:26.301481 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt" Oct 11 10:49:26.302220 master-2 kubenswrapper[4776]: I1011 10:49:26.302193 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt" Oct 11 10:49:26.302462 master-2 kubenswrapper[4776]: I1011 10:49:26.302423 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert" Oct 11 10:49:26.325816 master-2 kubenswrapper[4776]: I1011 10:49:26.325766 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-kjcgl"] Oct 11 10:49:26.380553 master-2 kubenswrapper[4776]: I1011 10:49:26.380488 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-registration-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.380798 master-2 kubenswrapper[4776]: I1011 10:49:26.380767 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-run-udev\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.380857 master-2 kubenswrapper[4776]: I1011 10:49:26.380823 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-pod-volumes-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.380912 master-2 kubenswrapper[4776]: I1011 10:49:26.380896 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-lvmd-config\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.380960 master-2 kubenswrapper[4776]: I1011 10:49:26.380920 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-node-plugin-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.380960 master-2 kubenswrapper[4776]: I1011 10:49:26.380956 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-csi-plugin-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.381041 master-2 kubenswrapper[4776]: I1011 10:49:26.380975 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/5648f79c-6a71-4f6f-8bde-b85a18b200bb-metrics-cert\") pod \"vg-manager-kjcgl\" (UID: 
\"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.381041 master-2 kubenswrapper[4776]: I1011 10:49:26.381000 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-device-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.381041 master-2 kubenswrapper[4776]: I1011 10:49:26.381021 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-file-lock-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.381173 master-2 kubenswrapper[4776]: I1011 10:49:26.381042 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-sys\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.381173 master-2 kubenswrapper[4776]: I1011 10:49:26.381065 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbhmz\" (UniqueName: \"kubernetes.io/projected/5648f79c-6a71-4f6f-8bde-b85a18b200bb-kube-api-access-kbhmz\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483005 master-2 kubenswrapper[4776]: I1011 10:49:26.482931 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-run-udev\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483005 master-2 kubenswrapper[4776]: I1011 10:49:26.483005 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-pod-volumes-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483262 master-2 kubenswrapper[4776]: I1011 10:49:26.483121 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-lvmd-config\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483262 master-2 kubenswrapper[4776]: I1011 10:49:26.483157 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-node-plugin-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483262 master-2 kubenswrapper[4776]: I1011 10:49:26.483147 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-run-udev\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " 
pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483356 master-2 kubenswrapper[4776]: I1011 10:49:26.483264 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-csi-plugin-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483356 master-2 kubenswrapper[4776]: I1011 10:49:26.483272 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-pod-volumes-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483356 master-2 kubenswrapper[4776]: I1011 10:49:26.483290 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/5648f79c-6a71-4f6f-8bde-b85a18b200bb-metrics-cert\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483487 master-2 kubenswrapper[4776]: I1011 10:49:26.483367 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-file-lock-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483487 master-2 kubenswrapper[4776]: I1011 10:49:26.483395 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-device-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483487 master-2 kubenswrapper[4776]: I1011 10:49:26.483422 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-sys\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483487 master-2 kubenswrapper[4776]: I1011 10:49:26.483470 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbhmz\" (UniqueName: \"kubernetes.io/projected/5648f79c-6a71-4f6f-8bde-b85a18b200bb-kube-api-access-kbhmz\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483617 master-2 kubenswrapper[4776]: I1011 10:49:26.483516 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-registration-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483617 master-2 kubenswrapper[4776]: I1011 10:49:26.483519 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-lvmd-config\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.483617 master-2 kubenswrapper[4776]: I1011 10:49:26.483604 4776 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-registration-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.484032 master-2 kubenswrapper[4776]: I1011 10:49:26.483650 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-device-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.484032 master-2 kubenswrapper[4776]: I1011 10:49:26.483700 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-sys\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.484032 master-2 kubenswrapper[4776]: I1011 10:49:26.483699 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-file-lock-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.484032 master-2 kubenswrapper[4776]: I1011 10:49:26.483891 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-node-plugin-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.484032 master-2 kubenswrapper[4776]: I1011 10:49:26.483961 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/5648f79c-6a71-4f6f-8bde-b85a18b200bb-csi-plugin-dir\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.487771 master-2 kubenswrapper[4776]: I1011 10:49:26.487732 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/5648f79c-6a71-4f6f-8bde-b85a18b200bb-metrics-cert\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.516115 master-2 kubenswrapper[4776]: I1011 10:49:26.516039 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbhmz\" (UniqueName: \"kubernetes.io/projected/5648f79c-6a71-4f6f-8bde-b85a18b200bb-kube-api-access-kbhmz\") pod \"vg-manager-kjcgl\" (UID: \"5648f79c-6a71-4f6f-8bde-b85a18b200bb\") " pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:26.614897 master-2 kubenswrapper[4776]: I1011 10:49:26.614762 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:27.106839 master-2 kubenswrapper[4776]: I1011 10:49:27.106582 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-kjcgl"] Oct 11 10:49:27.120970 master-2 kubenswrapper[4776]: W1011 10:49:27.120913 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5648f79c_6a71_4f6f_8bde_b85a18b200bb.slice/crio-9b2ea25ac9d68a89f1585b51b76f1e1c447645d1f6f2942ca575a9928bfbc186 WatchSource:0}: Error finding container 9b2ea25ac9d68a89f1585b51b76f1e1c447645d1f6f2942ca575a9928bfbc186: Status 404 returned error can't find the container with id 9b2ea25ac9d68a89f1585b51b76f1e1c447645d1f6f2942ca575a9928bfbc186 Oct 11 10:49:27.289432 master-2 kubenswrapper[4776]: I1011 10:49:27.289382 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-kjcgl" event={"ID":"5648f79c-6a71-4f6f-8bde-b85a18b200bb","Type":"ContainerStarted","Data":"9b2ea25ac9d68a89f1585b51b76f1e1c447645d1f6f2942ca575a9928bfbc186"} Oct 11 10:49:32.330430 master-2 kubenswrapper[4776]: I1011 10:49:32.330370 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-kjcgl" event={"ID":"5648f79c-6a71-4f6f-8bde-b85a18b200bb","Type":"ContainerStarted","Data":"fd081b69fcb20cd544ccfa2cdd1a44c6ed00db6df9d8d498706580d6534ed198"} Oct 11 10:49:32.386209 master-2 kubenswrapper[4776]: I1011 10:49:32.386123 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-kjcgl" podStartSLOduration=1.6736251 podStartE2EDuration="6.386105385s" podCreationTimestamp="2025-10-11 10:49:26 +0000 UTC" firstStartedPulling="2025-10-11 10:49:27.125698133 +0000 UTC m=+1401.910124852" lastFinishedPulling="2025-10-11 10:49:31.838178418 +0000 UTC m=+1406.622605137" observedRunningTime="2025-10-11 10:49:32.382347944 +0000 UTC m=+1407.166774653" watchObservedRunningTime="2025-10-11 10:49:32.386105385 +0000 UTC m=+1407.170532094" Oct 11 10:49:34.272602 master-2 kubenswrapper[4776]: I1011 10:49:34.272537 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:34.272602 master-2 kubenswrapper[4776]: I1011 10:49:34.272595 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:34.278148 master-2 kubenswrapper[4776]: I1011 10:49:34.278075 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:34.370796 master-2 kubenswrapper[4776]: I1011 10:49:34.368953 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-kjcgl_5648f79c-6a71-4f6f-8bde-b85a18b200bb/vg-manager/0.log" Oct 11 10:49:34.370796 master-2 kubenswrapper[4776]: I1011 10:49:34.369013 4776 generic.go:334] "Generic (PLEG): container finished" podID="5648f79c-6a71-4f6f-8bde-b85a18b200bb" containerID="fd081b69fcb20cd544ccfa2cdd1a44c6ed00db6df9d8d498706580d6534ed198" exitCode=1 Oct 11 10:49:34.370796 master-2 kubenswrapper[4776]: I1011 10:49:34.369391 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-kjcgl" event={"ID":"5648f79c-6a71-4f6f-8bde-b85a18b200bb","Type":"ContainerDied","Data":"fd081b69fcb20cd544ccfa2cdd1a44c6ed00db6df9d8d498706580d6534ed198"} Oct 11 10:49:34.370796 master-2 kubenswrapper[4776]: I1011 10:49:34.370181 4776 
scope.go:117] "RemoveContainer" containerID="fd081b69fcb20cd544ccfa2cdd1a44c6ed00db6df9d8d498706580d6534ed198" Oct 11 10:49:34.373951 master-2 kubenswrapper[4776]: I1011 10:49:34.373835 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-69f8677c95-z9d9d" Oct 11 10:49:34.659388 master-2 kubenswrapper[4776]: I1011 10:49:34.659341 4776 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock" Oct 11 10:49:34.920720 master-2 kubenswrapper[4776]: I1011 10:49:34.920577 4776 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2025-10-11T10:49:34.659368998Z","Handler":null,"Name":""} Oct 11 10:49:34.923286 master-2 kubenswrapper[4776]: I1011 10:49:34.923242 4776 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0 Oct 11 10:49:34.923358 master-2 kubenswrapper[4776]: I1011 10:49:34.923306 4776 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock Oct 11 10:49:35.380727 master-2 kubenswrapper[4776]: I1011 10:49:35.380566 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-kjcgl_5648f79c-6a71-4f6f-8bde-b85a18b200bb/vg-manager/0.log" Oct 11 10:49:35.381701 master-2 kubenswrapper[4776]: I1011 10:49:35.381635 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-kjcgl" event={"ID":"5648f79c-6a71-4f6f-8bde-b85a18b200bb","Type":"ContainerStarted","Data":"32d9e9657fcf1cce6ae9505666e681688cdea8f91ec69725af5a1043b546b958"} Oct 11 10:49:36.616210 master-2 kubenswrapper[4776]: I1011 10:49:36.616059 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:36.618330 master-2 kubenswrapper[4776]: I1011 10:49:36.618078 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:37.393286 master-2 kubenswrapper[4776]: I1011 10:49:37.393224 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:49:37.394843 master-2 kubenswrapper[4776]: I1011 10:49:37.394792 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-kjcgl" Oct 11 10:50:08.652842 master-2 kubenswrapper[4776]: I1011 10:50:08.652784 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-6-master-2"] Oct 11 10:50:08.654020 master-2 kubenswrapper[4776]: I1011 10:50:08.653817 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-6-master-2" Oct 11 10:50:08.660807 master-2 kubenswrapper[4776]: I1011 10:50:08.660759 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-t28rg" Oct 11 10:50:08.818276 master-2 kubenswrapper[4776]: I1011 10:50:08.818222 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0ae11ca-a8d5-4a55-9898-269dfe907446-kube-api-access\") pod \"revision-pruner-6-master-2\" (UID: \"b0ae11ca-a8d5-4a55-9898-269dfe907446\") " pod="openshift-kube-apiserver/revision-pruner-6-master-2" Oct 11 10:50:08.818571 master-2 kubenswrapper[4776]: I1011 10:50:08.818309 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0ae11ca-a8d5-4a55-9898-269dfe907446-kubelet-dir\") pod \"revision-pruner-6-master-2\" (UID: \"b0ae11ca-a8d5-4a55-9898-269dfe907446\") " pod="openshift-kube-apiserver/revision-pruner-6-master-2" Oct 11 10:50:08.920072 master-2 kubenswrapper[4776]: I1011 10:50:08.920015 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0ae11ca-a8d5-4a55-9898-269dfe907446-kube-api-access\") pod \"revision-pruner-6-master-2\" (UID: \"b0ae11ca-a8d5-4a55-9898-269dfe907446\") " pod="openshift-kube-apiserver/revision-pruner-6-master-2" Oct 11 10:50:08.920072 master-2 kubenswrapper[4776]: I1011 10:50:08.920074 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0ae11ca-a8d5-4a55-9898-269dfe907446-kubelet-dir\") pod \"revision-pruner-6-master-2\" (UID: \"b0ae11ca-a8d5-4a55-9898-269dfe907446\") " pod="openshift-kube-apiserver/revision-pruner-6-master-2" Oct 11 10:50:08.920352 master-2 kubenswrapper[4776]: I1011 10:50:08.920151 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0ae11ca-a8d5-4a55-9898-269dfe907446-kubelet-dir\") pod \"revision-pruner-6-master-2\" (UID: \"b0ae11ca-a8d5-4a55-9898-269dfe907446\") " pod="openshift-kube-apiserver/revision-pruner-6-master-2" Oct 11 10:50:08.933833 master-2 kubenswrapper[4776]: I1011 10:50:08.933755 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-6-master-2"] Oct 11 10:50:09.449334 master-2 kubenswrapper[4776]: I1011 10:50:09.449245 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0ae11ca-a8d5-4a55-9898-269dfe907446-kube-api-access\") pod \"revision-pruner-6-master-2\" (UID: \"b0ae11ca-a8d5-4a55-9898-269dfe907446\") " pod="openshift-kube-apiserver/revision-pruner-6-master-2" Oct 11 10:50:09.576204 master-2 kubenswrapper[4776]: I1011 10:50:09.576132 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-6-master-2" Oct 11 10:50:09.790427 master-2 kubenswrapper[4776]: I1011 10:50:09.790372 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w"] Oct 11 10:50:09.791767 master-2 kubenswrapper[4776]: I1011 10:50:09.791738 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:09.794162 master-2 kubenswrapper[4776]: I1011 10:50:09.794123 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 11 10:50:09.795020 master-2 kubenswrapper[4776]: I1011 10:50:09.794971 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 11 10:50:09.814788 master-2 kubenswrapper[4776]: I1011 10:50:09.811467 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w"] Oct 11 10:50:09.935715 master-2 kubenswrapper[4776]: I1011 10:50:09.935589 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-bundle\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:09.936026 master-2 kubenswrapper[4776]: I1011 10:50:09.935775 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-util\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:09.936026 master-2 kubenswrapper[4776]: I1011 10:50:09.935834 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v48t\" (UniqueName: \"kubernetes.io/projected/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-kube-api-access-5v48t\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:10.000710 master-2 kubenswrapper[4776]: I1011 10:50:10.000532 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-6-master-2"] Oct 11 10:50:10.036773 master-2 kubenswrapper[4776]: I1011 10:50:10.036713 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-util\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:10.036773 master-2 kubenswrapper[4776]: I1011 10:50:10.036774 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v48t\" (UniqueName: \"kubernetes.io/projected/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-kube-api-access-5v48t\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:10.036989 master-2 kubenswrapper[4776]: I1011 10:50:10.036840 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-bundle\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:10.037390 master-2 kubenswrapper[4776]: I1011 10:50:10.037349 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-bundle\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:10.037390 master-2 kubenswrapper[4776]: I1011 10:50:10.037345 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-util\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:10.060768 master-2 kubenswrapper[4776]: I1011 10:50:10.060734 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v48t\" (UniqueName: \"kubernetes.io/projected/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-kube-api-access-5v48t\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:10.109650 master-2 kubenswrapper[4776]: I1011 10:50:10.109576 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:10.528257 master-2 kubenswrapper[4776]: I1011 10:50:10.528204 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w"] Oct 11 10:50:10.534444 master-2 kubenswrapper[4776]: W1011 10:50:10.534386 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b3b7ba7_98af_44ea_b6da_9c37d9e1a6c7.slice/crio-b9c7ddb530d84b9909f6c52396407a73ff58e8ddb2c142a940d107ef62cb61f5 WatchSource:0}: Error finding container b9c7ddb530d84b9909f6c52396407a73ff58e8ddb2c142a940d107ef62cb61f5: Status 404 returned error can't find the container with id b9c7ddb530d84b9909f6c52396407a73ff58e8ddb2c142a940d107ef62cb61f5 Oct 11 10:50:10.632385 master-2 kubenswrapper[4776]: I1011 10:50:10.632256 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" event={"ID":"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7","Type":"ContainerStarted","Data":"b9c7ddb530d84b9909f6c52396407a73ff58e8ddb2c142a940d107ef62cb61f5"} Oct 11 10:50:10.634075 master-2 kubenswrapper[4776]: I1011 10:50:10.634029 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-6-master-2" event={"ID":"b0ae11ca-a8d5-4a55-9898-269dfe907446","Type":"ContainerStarted","Data":"97d2ffd60350f21f0391dcc9487188f6d1cbec2083573758ddac10b06f8b652f"} Oct 11 10:50:10.634140 master-2 kubenswrapper[4776]: I1011 10:50:10.634096 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-6-master-2" event={"ID":"b0ae11ca-a8d5-4a55-9898-269dfe907446","Type":"ContainerStarted","Data":"6bd4d19fed71de4022731fcbd86fc0592211b6138b640fa3bf6c9472b80af3af"} Oct 11 10:50:10.669245 master-2 kubenswrapper[4776]: I1011 10:50:10.669141 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-6-master-2" podStartSLOduration=2.669112873 podStartE2EDuration="2.669112873s" podCreationTimestamp="2025-10-11 10:50:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:50:10.662613797 +0000 UTC m=+1445.447040506" watchObservedRunningTime="2025-10-11 10:50:10.669112873 +0000 UTC m=+1445.453539622" Oct 11 10:50:11.645986 master-2 kubenswrapper[4776]: I1011 10:50:11.645852 4776 generic.go:334] "Generic (PLEG): container finished" podID="0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" containerID="4f921e436ab0f91cb17ee83e0c6c33d9542df9412303b9c24f03eca2d8428e93" exitCode=0 Oct 11 10:50:11.647053 master-2 kubenswrapper[4776]: I1011 10:50:11.646063 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" event={"ID":"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7","Type":"ContainerDied","Data":"4f921e436ab0f91cb17ee83e0c6c33d9542df9412303b9c24f03eca2d8428e93"} Oct 11 10:50:11.648535 master-2 kubenswrapper[4776]: I1011 10:50:11.647953 4776 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 10:50:11.648535 master-2 kubenswrapper[4776]: I1011 10:50:11.648134 4776 generic.go:334] "Generic (PLEG): container finished" podID="b0ae11ca-a8d5-4a55-9898-269dfe907446" 
containerID="97d2ffd60350f21f0391dcc9487188f6d1cbec2083573758ddac10b06f8b652f" exitCode=0 Oct 11 10:50:11.648535 master-2 kubenswrapper[4776]: I1011 10:50:11.648193 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-6-master-2" event={"ID":"b0ae11ca-a8d5-4a55-9898-269dfe907446","Type":"ContainerDied","Data":"97d2ffd60350f21f0391dcc9487188f6d1cbec2083573758ddac10b06f8b652f"} Oct 11 10:50:13.070728 master-2 kubenswrapper[4776]: I1011 10:50:13.070606 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-6-master-2" Oct 11 10:50:13.101859 master-2 kubenswrapper[4776]: I1011 10:50:13.101798 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0ae11ca-a8d5-4a55-9898-269dfe907446-kube-api-access\") pod \"b0ae11ca-a8d5-4a55-9898-269dfe907446\" (UID: \"b0ae11ca-a8d5-4a55-9898-269dfe907446\") " Oct 11 10:50:13.101859 master-2 kubenswrapper[4776]: I1011 10:50:13.101860 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0ae11ca-a8d5-4a55-9898-269dfe907446-kubelet-dir\") pod \"b0ae11ca-a8d5-4a55-9898-269dfe907446\" (UID: \"b0ae11ca-a8d5-4a55-9898-269dfe907446\") " Oct 11 10:50:13.102545 master-2 kubenswrapper[4776]: I1011 10:50:13.102197 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ae11ca-a8d5-4a55-9898-269dfe907446-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b0ae11ca-a8d5-4a55-9898-269dfe907446" (UID: "b0ae11ca-a8d5-4a55-9898-269dfe907446"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:50:13.113967 master-2 kubenswrapper[4776]: I1011 10:50:13.113901 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0ae11ca-a8d5-4a55-9898-269dfe907446-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b0ae11ca-a8d5-4a55-9898-269dfe907446" (UID: "b0ae11ca-a8d5-4a55-9898-269dfe907446"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:50:13.203603 master-2 kubenswrapper[4776]: I1011 10:50:13.203350 4776 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0ae11ca-a8d5-4a55-9898-269dfe907446-kubelet-dir\") on node \"master-2\" DevicePath \"\"" Oct 11 10:50:13.203603 master-2 kubenswrapper[4776]: I1011 10:50:13.203398 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0ae11ca-a8d5-4a55-9898-269dfe907446-kube-api-access\") on node \"master-2\" DevicePath \"\"" Oct 11 10:50:13.663876 master-2 kubenswrapper[4776]: I1011 10:50:13.663800 4776 generic.go:334] "Generic (PLEG): container finished" podID="0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" containerID="ed461758d15a346692f0fcd52aed862eab3e744e921a1f7a4e1926c605052e91" exitCode=0 Oct 11 10:50:13.664218 master-2 kubenswrapper[4776]: I1011 10:50:13.663898 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" event={"ID":"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7","Type":"ContainerDied","Data":"ed461758d15a346692f0fcd52aed862eab3e744e921a1f7a4e1926c605052e91"} Oct 11 10:50:13.665775 master-2 kubenswrapper[4776]: I1011 10:50:13.665740 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-6-master-2" event={"ID":"b0ae11ca-a8d5-4a55-9898-269dfe907446","Type":"ContainerDied","Data":"6bd4d19fed71de4022731fcbd86fc0592211b6138b640fa3bf6c9472b80af3af"} Oct 11 10:50:13.665775 master-2 kubenswrapper[4776]: I1011 10:50:13.665780 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bd4d19fed71de4022731fcbd86fc0592211b6138b640fa3bf6c9472b80af3af" Oct 11 10:50:13.665887 master-2 kubenswrapper[4776]: I1011 10:50:13.665802 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-6-master-2" Oct 11 10:50:14.676981 master-2 kubenswrapper[4776]: I1011 10:50:14.676880 4776 generic.go:334] "Generic (PLEG): container finished" podID="0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" containerID="2798ee030ae90bc44aeeeb787817932e921c9947d301d1bbd08e8d4c5d1dc632" exitCode=0 Oct 11 10:50:14.676981 master-2 kubenswrapper[4776]: I1011 10:50:14.676955 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" event={"ID":"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7","Type":"ContainerDied","Data":"2798ee030ae90bc44aeeeb787817932e921c9947d301d1bbd08e8d4c5d1dc632"} Oct 11 10:50:16.002844 master-2 kubenswrapper[4776]: I1011 10:50:16.002775 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:50:16.146256 master-2 kubenswrapper[4776]: I1011 10:50:16.146147 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v48t\" (UniqueName: \"kubernetes.io/projected/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-kube-api-access-5v48t\") pod \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " Oct 11 10:50:16.146519 master-2 kubenswrapper[4776]: I1011 10:50:16.146307 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-util\") pod \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " Oct 11 10:50:16.146519 master-2 kubenswrapper[4776]: I1011 10:50:16.146364 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-bundle\") pod \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\" (UID: \"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7\") " Oct 11 10:50:16.148042 master-2 kubenswrapper[4776]: I1011 10:50:16.147996 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-bundle" (OuterVolumeSpecName: "bundle") pod "0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" (UID: "0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:50:16.150344 master-2 kubenswrapper[4776]: I1011 10:50:16.150288 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-kube-api-access-5v48t" (OuterVolumeSpecName: "kube-api-access-5v48t") pod "0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" (UID: "0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7"). InnerVolumeSpecName "kube-api-access-5v48t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:50:16.161637 master-2 kubenswrapper[4776]: I1011 10:50:16.161576 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-util" (OuterVolumeSpecName: "util") pod "0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" (UID: "0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:50:16.249299 master-2 kubenswrapper[4776]: I1011 10:50:16.249122 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v48t\" (UniqueName: \"kubernetes.io/projected/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-kube-api-access-5v48t\") on node \"master-2\" DevicePath \"\"" Oct 11 10:50:16.249299 master-2 kubenswrapper[4776]: I1011 10:50:16.249187 4776 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-util\") on node \"master-2\" DevicePath \"\"" Oct 11 10:50:16.249299 master-2 kubenswrapper[4776]: I1011 10:50:16.249208 4776 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:50:16.698774 master-2 kubenswrapper[4776]: I1011 10:50:16.698456 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" event={"ID":"0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7","Type":"ContainerDied","Data":"b9c7ddb530d84b9909f6c52396407a73ff58e8ddb2c142a940d107ef62cb61f5"} Oct 11 10:50:16.698774 master-2 kubenswrapper[4776]: I1011 10:50:16.698515 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9c7ddb530d84b9909f6c52396407a73ff58e8ddb2c142a940d107ef62cb61f5" Oct 11 10:50:16.698774 master-2 kubenswrapper[4776]: I1011 10:50:16.698572 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w" Oct 11 10:51:56.767822 master-2 kubenswrapper[4776]: I1011 10:51:56.767768 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq"] Oct 11 10:51:56.771109 master-2 kubenswrapper[4776]: E1011 10:51:56.768016 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ae11ca-a8d5-4a55-9898-269dfe907446" containerName="pruner" Oct 11 10:51:56.771109 master-2 kubenswrapper[4776]: I1011 10:51:56.768028 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ae11ca-a8d5-4a55-9898-269dfe907446" containerName="pruner" Oct 11 10:51:56.771109 master-2 kubenswrapper[4776]: E1011 10:51:56.768037 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" containerName="util" Oct 11 10:51:56.771109 master-2 kubenswrapper[4776]: I1011 10:51:56.768043 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" containerName="util" Oct 11 10:51:56.771109 master-2 kubenswrapper[4776]: E1011 10:51:56.768061 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" containerName="extract" Oct 11 10:51:56.771109 master-2 kubenswrapper[4776]: I1011 10:51:56.768066 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" containerName="extract" Oct 11 10:51:56.771109 master-2 kubenswrapper[4776]: E1011 10:51:56.768076 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" containerName="pull" Oct 11 10:51:56.771109 master-2 kubenswrapper[4776]: I1011 10:51:56.768081 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" containerName="pull" Oct 11 
10:51:56.771109 master-2 kubenswrapper[4776]: I1011 10:51:56.768182 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3b7ba7-98af-44ea-b6da-9c37d9e1a6c7" containerName="extract" Oct 11 10:51:56.771109 master-2 kubenswrapper[4776]: I1011 10:51:56.768193 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ae11ca-a8d5-4a55-9898-269dfe907446" containerName="pruner" Oct 11 10:51:56.771109 master-2 kubenswrapper[4776]: I1011 10:51:56.768888 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq" Oct 11 10:51:56.772012 master-2 kubenswrapper[4776]: I1011 10:51:56.771983 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 11 10:51:56.772062 master-2 kubenswrapper[4776]: I1011 10:51:56.772014 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 11 10:51:56.806697 master-2 kubenswrapper[4776]: I1011 10:51:56.804739 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq"] Oct 11 10:51:56.822702 master-2 kubenswrapper[4776]: I1011 10:51:56.822057 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl"] Oct 11 10:51:56.824822 master-2 kubenswrapper[4776]: I1011 10:51:56.823531 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl" Oct 11 10:51:56.850082 master-2 kubenswrapper[4776]: I1011 10:51:56.850049 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl"] Oct 11 10:51:56.869994 master-2 kubenswrapper[4776]: I1011 10:51:56.867597 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxsgf\" (UniqueName: \"kubernetes.io/projected/6892e393-3308-4454-90bc-06af6038c240-kube-api-access-dxsgf\") pod \"cinder-operator-controller-manager-5484486656-rw2pq\" (UID: \"6892e393-3308-4454-90bc-06af6038c240\") " pod="openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq" Oct 11 10:51:56.870842 master-2 kubenswrapper[4776]: I1011 10:51:56.870779 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgsjl\" (UniqueName: \"kubernetes.io/projected/81dbec9a-863f-4698-a04b-2fd7e6bb2a02-kube-api-access-wgsjl\") pod \"designate-operator-controller-manager-67d84b9cc-fxdhl\" (UID: \"81dbec9a-863f-4698-a04b-2fd7e6bb2a02\") " pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl" Oct 11 10:51:56.885602 master-2 kubenswrapper[4776]: I1011 10:51:56.885522 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb"] Oct 11 10:51:56.888525 master-2 kubenswrapper[4776]: I1011 10:51:56.888465 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb" Oct 11 10:51:56.908832 master-2 kubenswrapper[4776]: I1011 10:51:56.908753 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb"] Oct 11 10:51:56.972859 master-2 kubenswrapper[4776]: I1011 10:51:56.972773 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxsgf\" (UniqueName: \"kubernetes.io/projected/6892e393-3308-4454-90bc-06af6038c240-kube-api-access-dxsgf\") pod \"cinder-operator-controller-manager-5484486656-rw2pq\" (UID: \"6892e393-3308-4454-90bc-06af6038c240\") " pod="openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq" Oct 11 10:51:56.972859 master-2 kubenswrapper[4776]: I1011 10:51:56.972844 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4wp6\" (UniqueName: \"kubernetes.io/projected/0ae13eec-2496-4fc6-a6e1-db8b75944959-kube-api-access-p4wp6\") pod \"glance-operator-controller-manager-59bd97c6b9-kmrbb\" (UID: \"0ae13eec-2496-4fc6-a6e1-db8b75944959\") " pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb" Oct 11 10:51:56.973258 master-2 kubenswrapper[4776]: I1011 10:51:56.972884 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgsjl\" (UniqueName: \"kubernetes.io/projected/81dbec9a-863f-4698-a04b-2fd7e6bb2a02-kube-api-access-wgsjl\") pod \"designate-operator-controller-manager-67d84b9cc-fxdhl\" (UID: \"81dbec9a-863f-4698-a04b-2fd7e6bb2a02\") " pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl" Oct 11 10:51:56.995071 master-2 kubenswrapper[4776]: I1011 10:51:56.994637 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv"] Oct 11 10:51:56.995771 master-2 kubenswrapper[4776]: I1011 10:51:56.995584 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv" Oct 11 10:51:57.021266 master-2 kubenswrapper[4776]: I1011 10:51:57.021135 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgsjl\" (UniqueName: \"kubernetes.io/projected/81dbec9a-863f-4698-a04b-2fd7e6bb2a02-kube-api-access-wgsjl\") pod \"designate-operator-controller-manager-67d84b9cc-fxdhl\" (UID: \"81dbec9a-863f-4698-a04b-2fd7e6bb2a02\") " pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl" Oct 11 10:51:57.027115 master-2 kubenswrapper[4776]: I1011 10:51:57.025000 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv"] Oct 11 10:51:57.031840 master-2 kubenswrapper[4776]: I1011 10:51:57.029883 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxsgf\" (UniqueName: \"kubernetes.io/projected/6892e393-3308-4454-90bc-06af6038c240-kube-api-access-dxsgf\") pod \"cinder-operator-controller-manager-5484486656-rw2pq\" (UID: \"6892e393-3308-4454-90bc-06af6038c240\") " pod="openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq" Oct 11 10:51:57.040110 master-2 kubenswrapper[4776]: I1011 10:51:57.039406 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4"] Oct 11 10:51:57.041184 master-2 kubenswrapper[4776]: I1011 10:51:57.041099 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4" Oct 11 10:51:57.054394 master-2 kubenswrapper[4776]: I1011 10:51:57.054343 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4"] Oct 11 10:51:57.073769 master-2 kubenswrapper[4776]: I1011 10:51:57.073705 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42bbp\" (UniqueName: \"kubernetes.io/projected/aef1b152-b3f9-4e71-acd5-912ea87347e5-kube-api-access-42bbp\") pod \"manila-operator-controller-manager-6d78f57554-k69p4\" (UID: \"aef1b152-b3f9-4e71-acd5-912ea87347e5\") " pod="openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4" Oct 11 10:51:57.073981 master-2 kubenswrapper[4776]: I1011 10:51:57.073828 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms6ck\" (UniqueName: \"kubernetes.io/projected/77ea22b9-4ddd-47e0-8d6b-33a046ec10fa-kube-api-access-ms6ck\") pod \"keystone-operator-controller-manager-f4487c759-5ktpv\" (UID: \"77ea22b9-4ddd-47e0-8d6b-33a046ec10fa\") " pod="openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv" Oct 11 10:51:57.073981 master-2 kubenswrapper[4776]: I1011 10:51:57.073858 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4wp6\" (UniqueName: \"kubernetes.io/projected/0ae13eec-2496-4fc6-a6e1-db8b75944959-kube-api-access-p4wp6\") pod \"glance-operator-controller-manager-59bd97c6b9-kmrbb\" (UID: \"0ae13eec-2496-4fc6-a6e1-db8b75944959\") " pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb" Oct 11 10:51:57.091057 master-2 kubenswrapper[4776]: I1011 10:51:57.091025 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq" Oct 11 10:51:57.113037 master-2 kubenswrapper[4776]: I1011 10:51:57.112984 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576"] Oct 11 10:51:57.113841 master-2 kubenswrapper[4776]: I1011 10:51:57.113817 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4wp6\" (UniqueName: \"kubernetes.io/projected/0ae13eec-2496-4fc6-a6e1-db8b75944959-kube-api-access-p4wp6\") pod \"glance-operator-controller-manager-59bd97c6b9-kmrbb\" (UID: \"0ae13eec-2496-4fc6-a6e1-db8b75944959\") " pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb" Oct 11 10:51:57.115058 master-2 kubenswrapper[4776]: I1011 10:51:57.114743 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576" Oct 11 10:51:57.118900 master-2 kubenswrapper[4776]: I1011 10:51:57.116988 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576"] Oct 11 10:51:57.175884 master-2 kubenswrapper[4776]: I1011 10:51:57.175820 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ddhx\" (UniqueName: \"kubernetes.io/projected/dfa45a38-a374-4106-80ff-49527e765f82-kube-api-access-7ddhx\") pod \"neutron-operator-controller-manager-7c95684bcc-vt576\" (UID: \"dfa45a38-a374-4106-80ff-49527e765f82\") " pod="openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576" Oct 11 10:51:57.176351 master-2 kubenswrapper[4776]: I1011 10:51:57.175995 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms6ck\" (UniqueName: \"kubernetes.io/projected/77ea22b9-4ddd-47e0-8d6b-33a046ec10fa-kube-api-access-ms6ck\") pod \"keystone-operator-controller-manager-f4487c759-5ktpv\" (UID: \"77ea22b9-4ddd-47e0-8d6b-33a046ec10fa\") " pod="openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv" Oct 11 10:51:57.176351 master-2 kubenswrapper[4776]: I1011 10:51:57.176039 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42bbp\" (UniqueName: \"kubernetes.io/projected/aef1b152-b3f9-4e71-acd5-912ea87347e5-kube-api-access-42bbp\") pod \"manila-operator-controller-manager-6d78f57554-k69p4\" (UID: \"aef1b152-b3f9-4e71-acd5-912ea87347e5\") " pod="openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4" Oct 11 10:51:57.216202 master-2 kubenswrapper[4776]: I1011 10:51:57.216134 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42bbp\" (UniqueName: \"kubernetes.io/projected/aef1b152-b3f9-4e71-acd5-912ea87347e5-kube-api-access-42bbp\") pod \"manila-operator-controller-manager-6d78f57554-k69p4\" (UID: \"aef1b152-b3f9-4e71-acd5-912ea87347e5\") " pod="openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4" Oct 11 10:51:57.228978 master-2 kubenswrapper[4776]: I1011 10:51:57.228913 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl" Oct 11 10:51:57.232870 master-2 kubenswrapper[4776]: I1011 10:51:57.232599 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms6ck\" (UniqueName: \"kubernetes.io/projected/77ea22b9-4ddd-47e0-8d6b-33a046ec10fa-kube-api-access-ms6ck\") pod \"keystone-operator-controller-manager-f4487c759-5ktpv\" (UID: \"77ea22b9-4ddd-47e0-8d6b-33a046ec10fa\") " pod="openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv" Oct 11 10:51:57.264744 master-2 kubenswrapper[4776]: I1011 10:51:57.264653 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb" Oct 11 10:51:57.277834 master-2 kubenswrapper[4776]: I1011 10:51:57.277607 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ddhx\" (UniqueName: \"kubernetes.io/projected/dfa45a38-a374-4106-80ff-49527e765f82-kube-api-access-7ddhx\") pod \"neutron-operator-controller-manager-7c95684bcc-vt576\" (UID: \"dfa45a38-a374-4106-80ff-49527e765f82\") " pod="openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576" Oct 11 10:51:57.318765 master-2 kubenswrapper[4776]: I1011 10:51:57.318184 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv" Oct 11 10:51:57.326181 master-2 kubenswrapper[4776]: I1011 10:51:57.326131 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g"] Oct 11 10:51:57.334308 master-2 kubenswrapper[4776]: I1011 10:51:57.334244 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ddhx\" (UniqueName: \"kubernetes.io/projected/dfa45a38-a374-4106-80ff-49527e765f82-kube-api-access-7ddhx\") pod \"neutron-operator-controller-manager-7c95684bcc-vt576\" (UID: \"dfa45a38-a374-4106-80ff-49527e765f82\") " pod="openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576" Oct 11 10:51:57.335485 master-2 kubenswrapper[4776]: I1011 10:51:57.335441 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g" Oct 11 10:51:57.380207 master-2 kubenswrapper[4776]: I1011 10:51:57.380059 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddgpv\" (UniqueName: \"kubernetes.io/projected/b4fb1c59-06e0-48e3-a428-e104afe4c0f7-kube-api-access-ddgpv\") pod \"swift-operator-controller-manager-6d4f9d7767-x9x4g\" (UID: \"b4fb1c59-06e0-48e3-a428-e104afe4c0f7\") " pod="openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g" Oct 11 10:51:57.392837 master-2 kubenswrapper[4776]: I1011 10:51:57.390006 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4" Oct 11 10:51:57.401171 master-2 kubenswrapper[4776]: I1011 10:51:57.400170 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g"] Oct 11 10:51:57.453187 master-2 kubenswrapper[4776]: I1011 10:51:57.453135 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x"] Oct 11 10:51:57.454664 master-2 kubenswrapper[4776]: I1011 10:51:57.454633 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x" Oct 11 10:51:57.461406 master-2 kubenswrapper[4776]: I1011 10:51:57.461367 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576" Oct 11 10:51:57.470400 master-2 kubenswrapper[4776]: I1011 10:51:57.470350 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x"] Oct 11 10:51:57.490242 master-2 kubenswrapper[4776]: I1011 10:51:57.490202 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddgpv\" (UniqueName: \"kubernetes.io/projected/b4fb1c59-06e0-48e3-a428-e104afe4c0f7-kube-api-access-ddgpv\") pod \"swift-operator-controller-manager-6d4f9d7767-x9x4g\" (UID: \"b4fb1c59-06e0-48e3-a428-e104afe4c0f7\") " pod="openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g" Oct 11 10:51:57.533082 master-2 kubenswrapper[4776]: I1011 10:51:57.532908 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddgpv\" (UniqueName: \"kubernetes.io/projected/b4fb1c59-06e0-48e3-a428-e104afe4c0f7-kube-api-access-ddgpv\") pod \"swift-operator-controller-manager-6d4f9d7767-x9x4g\" (UID: \"b4fb1c59-06e0-48e3-a428-e104afe4c0f7\") " pod="openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g" Oct 11 10:51:57.593791 master-2 kubenswrapper[4776]: I1011 10:51:57.593734 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fv96\" (UniqueName: \"kubernetes.io/projected/76a4deee-4f6c-4088-9a56-4d9141924af2-kube-api-access-4fv96\") pod \"watcher-operator-controller-manager-7c4579d8cf-ttj8x\" (UID: \"76a4deee-4f6c-4088-9a56-4d9141924af2\") " pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x" Oct 11 10:51:57.598190 master-2 kubenswrapper[4776]: I1011 10:51:57.598150 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq"] Oct 11 10:51:57.652443 master-2 kubenswrapper[4776]: I1011 10:51:57.652383 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g" Oct 11 10:51:57.696650 master-2 kubenswrapper[4776]: I1011 10:51:57.696549 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fv96\" (UniqueName: \"kubernetes.io/projected/76a4deee-4f6c-4088-9a56-4d9141924af2-kube-api-access-4fv96\") pod \"watcher-operator-controller-manager-7c4579d8cf-ttj8x\" (UID: \"76a4deee-4f6c-4088-9a56-4d9141924af2\") " pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x" Oct 11 10:51:57.740257 master-2 kubenswrapper[4776]: I1011 10:51:57.740161 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl"] Oct 11 10:51:57.768720 master-2 kubenswrapper[4776]: I1011 10:51:57.768470 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fv96\" (UniqueName: \"kubernetes.io/projected/76a4deee-4f6c-4088-9a56-4d9141924af2-kube-api-access-4fv96\") pod \"watcher-operator-controller-manager-7c4579d8cf-ttj8x\" (UID: \"76a4deee-4f6c-4088-9a56-4d9141924af2\") " pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x" Oct 11 10:51:57.777305 master-2 kubenswrapper[4776]: I1011 10:51:57.777256 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x" Oct 11 10:51:57.869025 master-2 kubenswrapper[4776]: I1011 10:51:57.868983 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb"] Oct 11 10:51:57.964571 master-2 kubenswrapper[4776]: I1011 10:51:57.964500 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv"] Oct 11 10:51:58.108462 master-2 kubenswrapper[4776]: I1011 10:51:58.108410 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4"] Oct 11 10:51:58.113019 master-2 kubenswrapper[4776]: I1011 10:51:58.112968 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576"] Oct 11 10:51:58.314461 master-2 kubenswrapper[4776]: I1011 10:51:58.314424 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g"] Oct 11 10:51:58.319338 master-2 kubenswrapper[4776]: W1011 10:51:58.319217 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4fb1c59_06e0_48e3_a428_e104afe4c0f7.slice/crio-e10964b154779710e5b10dd7be9c7ec4207bb3a0a1a71351a63412c235711473 WatchSource:0}: Error finding container e10964b154779710e5b10dd7be9c7ec4207bb3a0a1a71351a63412c235711473: Status 404 returned error can't find the container with id e10964b154779710e5b10dd7be9c7ec4207bb3a0a1a71351a63412c235711473 Oct 11 10:51:58.438955 master-2 kubenswrapper[4776]: I1011 10:51:58.438896 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x"] Oct 11 10:51:58.441323 master-2 kubenswrapper[4776]: W1011 10:51:58.441275 4776 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76a4deee_4f6c_4088_9a56_4d9141924af2.slice/crio-480ea18a67d12af6346cdf9814ceec6587b27be17f11d40b3b80288f3b37b4a3 WatchSource:0}: Error finding container 480ea18a67d12af6346cdf9814ceec6587b27be17f11d40b3b80288f3b37b4a3: Status 404 returned error can't find the container with id 480ea18a67d12af6346cdf9814ceec6587b27be17f11d40b3b80288f3b37b4a3 Oct 11 10:51:58.506730 master-2 kubenswrapper[4776]: I1011 10:51:58.506666 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g" event={"ID":"b4fb1c59-06e0-48e3-a428-e104afe4c0f7","Type":"ContainerStarted","Data":"e10964b154779710e5b10dd7be9c7ec4207bb3a0a1a71351a63412c235711473"} Oct 11 10:51:58.508874 master-2 kubenswrapper[4776]: I1011 10:51:58.508713 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb" event={"ID":"0ae13eec-2496-4fc6-a6e1-db8b75944959","Type":"ContainerStarted","Data":"445d7dca6c4f95b76f060cbfd085b83f0586f8444df7d20ef904a68c1d3fb21f"} Oct 11 10:51:58.510061 master-2 kubenswrapper[4776]: I1011 10:51:58.510026 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x" event={"ID":"76a4deee-4f6c-4088-9a56-4d9141924af2","Type":"ContainerStarted","Data":"480ea18a67d12af6346cdf9814ceec6587b27be17f11d40b3b80288f3b37b4a3"} Oct 11 10:51:58.511194 master-2 kubenswrapper[4776]: I1011 10:51:58.511153 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576" event={"ID":"dfa45a38-a374-4106-80ff-49527e765f82","Type":"ContainerStarted","Data":"c469615d7bd87268909321c5098b3faeb326076e290d36e967ecd0e9b0fb3191"} Oct 11 10:51:58.512251 master-2 kubenswrapper[4776]: I1011 10:51:58.512226 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv" event={"ID":"77ea22b9-4ddd-47e0-8d6b-33a046ec10fa","Type":"ContainerStarted","Data":"42535b5d5ed832d2129ea582ffaf02ba02b14faf51b4e26ed245811335e92256"} Oct 11 10:51:58.513516 master-2 kubenswrapper[4776]: I1011 10:51:58.513302 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl" event={"ID":"81dbec9a-863f-4698-a04b-2fd7e6bb2a02","Type":"ContainerStarted","Data":"1e79ce35fbd382137739b7787ca5ecbfc1355f62841bbb8f6ea27184b5230a4e"} Oct 11 10:51:58.515216 master-2 kubenswrapper[4776]: I1011 10:51:58.515132 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4" event={"ID":"aef1b152-b3f9-4e71-acd5-912ea87347e5","Type":"ContainerStarted","Data":"a99a07e811537522836ee684f7c91b124316af68e62ed9c5c2d02daa8288371e"} Oct 11 10:51:58.516443 master-2 kubenswrapper[4776]: I1011 10:51:58.516420 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq" event={"ID":"6892e393-3308-4454-90bc-06af6038c240","Type":"ContainerStarted","Data":"2325dbc1a300f4db8e2127277016cc7fe6292252edd8b7bf8c08cf345ac13ea2"} Oct 11 10:52:04.582068 master-2 kubenswrapper[4776]: I1011 10:52:04.582010 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4" 
event={"ID":"aef1b152-b3f9-4e71-acd5-912ea87347e5","Type":"ContainerStarted","Data":"948fde857a9b381c7a814421c3a5d3187f91eff1b951b476ce627aeba5ba177a"} Oct 11 10:52:04.583922 master-2 kubenswrapper[4776]: I1011 10:52:04.583896 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq" event={"ID":"6892e393-3308-4454-90bc-06af6038c240","Type":"ContainerStarted","Data":"1d671761c34b9ecaf265bfd7d4b1f3aabe17afed30964db486fa8c1a554ed3ba"} Oct 11 10:52:04.585684 master-2 kubenswrapper[4776]: I1011 10:52:04.585624 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g" event={"ID":"b4fb1c59-06e0-48e3-a428-e104afe4c0f7","Type":"ContainerStarted","Data":"96744c11337202cf7be38e2fb10d292699ebb27db69b3217414380ce0b790a29"} Oct 11 10:52:04.587124 master-2 kubenswrapper[4776]: I1011 10:52:04.587090 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb" event={"ID":"0ae13eec-2496-4fc6-a6e1-db8b75944959","Type":"ContainerStarted","Data":"eb90e6b27fb7a6d629a88c3113680bf9faa957d756a3690989b4bf180b9a30b3"} Oct 11 10:52:04.588379 master-2 kubenswrapper[4776]: I1011 10:52:04.588346 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x" event={"ID":"76a4deee-4f6c-4088-9a56-4d9141924af2","Type":"ContainerStarted","Data":"543f42e3e61d9c62a18c7dcbbe3c9db42b1cf7cd340a01f4b44ecf8b32e9b804"} Oct 11 10:52:04.590128 master-2 kubenswrapper[4776]: I1011 10:52:04.590094 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576" event={"ID":"dfa45a38-a374-4106-80ff-49527e765f82","Type":"ContainerStarted","Data":"ae4559afb3f3421f36a8afff29e46fdbdf54e58d5c26a19c2af812f7f8b82878"} Oct 11 10:52:04.591819 master-2 kubenswrapper[4776]: I1011 10:52:04.591781 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv" event={"ID":"77ea22b9-4ddd-47e0-8d6b-33a046ec10fa","Type":"ContainerStarted","Data":"d471fe633dbfb8b86b2c7f1efd6611b02ce4213b4a2e19634d2465a91e10a94a"} Oct 11 10:52:04.593701 master-2 kubenswrapper[4776]: I1011 10:52:04.593652 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl" event={"ID":"81dbec9a-863f-4698-a04b-2fd7e6bb2a02","Type":"ContainerStarted","Data":"d6b8c3b8a3bbb1a280c571e349c76452d0cda627f0295661e0a951c3cc71596c"} Oct 11 10:52:07.646558 master-2 kubenswrapper[4776]: I1011 10:52:07.646014 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4" event={"ID":"aef1b152-b3f9-4e71-acd5-912ea87347e5","Type":"ContainerStarted","Data":"9ce7f1114634f4d6c97c8fbe69f683adaae4793216aa18c8161422d64ed02b50"} Oct 11 10:52:07.650459 master-2 kubenswrapper[4776]: I1011 10:52:07.650397 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4" Oct 11 10:52:07.651720 master-2 kubenswrapper[4776]: I1011 10:52:07.651625 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq" 
event={"ID":"6892e393-3308-4454-90bc-06af6038c240","Type":"ContainerStarted","Data":"bcc08fdcbef1a487f0fed1e55c87510747ed53ed96624ef310d4afdd03aea0b3"} Oct 11 10:52:07.652343 master-2 kubenswrapper[4776]: I1011 10:52:07.652301 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq" Oct 11 10:52:07.659889 master-2 kubenswrapper[4776]: I1011 10:52:07.659825 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g" event={"ID":"b4fb1c59-06e0-48e3-a428-e104afe4c0f7","Type":"ContainerStarted","Data":"bde459dfdf435c9a56a96c3f0be185ef523f2170e67a79271541516a08168379"} Oct 11 10:52:07.660358 master-2 kubenswrapper[4776]: I1011 10:52:07.660051 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g" Oct 11 10:52:07.673695 master-2 kubenswrapper[4776]: I1011 10:52:07.672130 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb" event={"ID":"0ae13eec-2496-4fc6-a6e1-db8b75944959","Type":"ContainerStarted","Data":"89b5e8fb20c4760a8d4090642dbe825dd95de845610bbbb0ad21679c9293f405"} Oct 11 10:52:07.673695 master-2 kubenswrapper[4776]: I1011 10:52:07.672850 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb" Oct 11 10:52:07.677700 master-2 kubenswrapper[4776]: I1011 10:52:07.677007 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4" podStartSLOduration=1.403423979 podStartE2EDuration="10.676992456s" podCreationTimestamp="2025-10-11 10:51:57 +0000 UTC" firstStartedPulling="2025-10-11 10:51:58.115036845 +0000 UTC m=+1552.899463554" lastFinishedPulling="2025-10-11 10:52:07.388605312 +0000 UTC m=+1562.173032031" observedRunningTime="2025-10-11 10:52:07.67308711 +0000 UTC m=+1562.457513819" watchObservedRunningTime="2025-10-11 10:52:07.676992456 +0000 UTC m=+1562.461419165" Oct 11 10:52:07.679775 master-2 kubenswrapper[4776]: I1011 10:52:07.678013 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x" event={"ID":"76a4deee-4f6c-4088-9a56-4d9141924af2","Type":"ContainerStarted","Data":"478410317ac13077a7c605a6091139aec4ff51e943162d71f7406ce54d4b9109"} Oct 11 10:52:07.679775 master-2 kubenswrapper[4776]: I1011 10:52:07.678222 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x" Oct 11 10:52:07.684101 master-2 kubenswrapper[4776]: I1011 10:52:07.682746 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv" event={"ID":"77ea22b9-4ddd-47e0-8d6b-33a046ec10fa","Type":"ContainerStarted","Data":"86ea8366d381d19c1cb4cd566560ff22017f428230def887949ac97e54c275ee"} Oct 11 10:52:07.684101 master-2 kubenswrapper[4776]: I1011 10:52:07.683958 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv" Oct 11 10:52:07.700709 master-2 kubenswrapper[4776]: I1011 10:52:07.700385 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl" event={"ID":"81dbec9a-863f-4698-a04b-2fd7e6bb2a02","Type":"ContainerStarted","Data":"908dc1309168d6400346bf478385da8d5671667f824ce8bf4f01027f08088509"} Oct 11 10:52:07.701702 master-2 kubenswrapper[4776]: I1011 10:52:07.701196 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl" Oct 11 10:52:07.741519 master-2 kubenswrapper[4776]: I1011 10:52:07.741457 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb" podStartSLOduration=2.9663799539999998 podStartE2EDuration="11.741440003s" podCreationTimestamp="2025-10-11 10:51:56 +0000 UTC" firstStartedPulling="2025-10-11 10:51:57.878575649 +0000 UTC m=+1552.663002358" lastFinishedPulling="2025-10-11 10:52:06.653635698 +0000 UTC m=+1561.438062407" observedRunningTime="2025-10-11 10:52:07.719478137 +0000 UTC m=+1562.503904856" watchObservedRunningTime="2025-10-11 10:52:07.741440003 +0000 UTC m=+1562.525866712" Oct 11 10:52:07.743311 master-2 kubenswrapper[4776]: I1011 10:52:07.743279 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g" podStartSLOduration=2.40693587 podStartE2EDuration="10.743273522s" podCreationTimestamp="2025-10-11 10:51:57 +0000 UTC" firstStartedPulling="2025-10-11 10:51:58.322032704 +0000 UTC m=+1553.106459413" lastFinishedPulling="2025-10-11 10:52:06.658370356 +0000 UTC m=+1561.442797065" observedRunningTime="2025-10-11 10:52:07.741077562 +0000 UTC m=+1562.525504271" watchObservedRunningTime="2025-10-11 10:52:07.743273522 +0000 UTC m=+1562.527700221" Oct 11 10:52:07.770133 master-2 kubenswrapper[4776]: I1011 10:52:07.770071 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq" podStartSLOduration=1.8538699699999999 podStartE2EDuration="11.770056518s" podCreationTimestamp="2025-10-11 10:51:56 +0000 UTC" firstStartedPulling="2025-10-11 10:51:57.575385194 +0000 UTC m=+1552.359811903" lastFinishedPulling="2025-10-11 10:52:07.491571742 +0000 UTC m=+1562.275998451" observedRunningTime="2025-10-11 10:52:07.768943657 +0000 UTC m=+1562.553370366" watchObservedRunningTime="2025-10-11 10:52:07.770056518 +0000 UTC m=+1562.554483227" Oct 11 10:52:07.793636 master-2 kubenswrapper[4776]: I1011 10:52:07.793551 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x" podStartSLOduration=2.5345846979999997 podStartE2EDuration="10.793532603s" podCreationTimestamp="2025-10-11 10:51:57 +0000 UTC" firstStartedPulling="2025-10-11 10:51:58.444275857 +0000 UTC m=+1553.228702566" lastFinishedPulling="2025-10-11 10:52:06.703223752 +0000 UTC m=+1561.487650471" observedRunningTime="2025-10-11 10:52:07.790045519 +0000 UTC m=+1562.574472228" watchObservedRunningTime="2025-10-11 10:52:07.793532603 +0000 UTC m=+1562.577959322" Oct 11 10:52:07.996067 master-2 kubenswrapper[4776]: I1011 10:52:07.995983 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv" podStartSLOduration=2.517443329 podStartE2EDuration="11.995968198s" podCreationTimestamp="2025-10-11 10:51:56 +0000 UTC" firstStartedPulling="2025-10-11 
10:51:57.973245274 +0000 UTC m=+1552.757671983" lastFinishedPulling="2025-10-11 10:52:07.451770143 +0000 UTC m=+1562.236196852" observedRunningTime="2025-10-11 10:52:07.990975054 +0000 UTC m=+1562.775401763" watchObservedRunningTime="2025-10-11 10:52:07.995968198 +0000 UTC m=+1562.780394907" Oct 11 10:52:07.997957 master-2 kubenswrapper[4776]: I1011 10:52:07.997915 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl" podStartSLOduration=2.4103041960000002 podStartE2EDuration="11.997906851s" podCreationTimestamp="2025-10-11 10:51:56 +0000 UTC" firstStartedPulling="2025-10-11 10:51:57.768651121 +0000 UTC m=+1552.553077830" lastFinishedPulling="2025-10-11 10:52:07.356253776 +0000 UTC m=+1562.140680485" observedRunningTime="2025-10-11 10:52:07.817288978 +0000 UTC m=+1562.601715687" watchObservedRunningTime="2025-10-11 10:52:07.997906851 +0000 UTC m=+1562.782333560" Oct 11 10:52:08.711800 master-2 kubenswrapper[4776]: I1011 10:52:08.711369 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576" event={"ID":"dfa45a38-a374-4106-80ff-49527e765f82","Type":"ContainerStarted","Data":"48d8fd0f9b192d7e8cb770d76ad9e527ef68340c54c99515e2b41dea8b8c9ea6"} Oct 11 10:52:08.714421 master-2 kubenswrapper[4776]: I1011 10:52:08.713859 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq" Oct 11 10:52:08.715372 master-2 kubenswrapper[4776]: I1011 10:52:08.715316 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x" Oct 11 10:52:08.715846 master-2 kubenswrapper[4776]: I1011 10:52:08.715721 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl" Oct 11 10:52:08.716480 master-2 kubenswrapper[4776]: I1011 10:52:08.716099 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4" Oct 11 10:52:08.717167 master-2 kubenswrapper[4776]: I1011 10:52:08.717080 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb" Oct 11 10:52:08.717486 master-2 kubenswrapper[4776]: I1011 10:52:08.717449 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g" Oct 11 10:52:08.719010 master-2 kubenswrapper[4776]: I1011 10:52:08.718915 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv" Oct 11 10:52:09.073244 master-2 kubenswrapper[4776]: I1011 10:52:09.073076 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576" podStartSLOduration=2.641837474 podStartE2EDuration="12.073048782s" podCreationTimestamp="2025-10-11 10:51:57 +0000 UTC" firstStartedPulling="2025-10-11 10:51:58.140385072 +0000 UTC m=+1552.924811791" lastFinishedPulling="2025-10-11 10:52:07.57159639 +0000 UTC m=+1562.356023099" observedRunningTime="2025-10-11 10:52:09.033419088 +0000 UTC m=+1563.817845837" watchObservedRunningTime="2025-10-11 
10:52:09.073048782 +0000 UTC m=+1563.857475531" Oct 11 10:52:09.754633 master-2 kubenswrapper[4776]: I1011 10:52:09.750395 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576" Oct 11 10:52:09.772710 master-2 kubenswrapper[4776]: I1011 10:52:09.763025 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576" Oct 11 10:52:48.149703 master-2 kubenswrapper[4776]: I1011 10:52:48.149226 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fd846fcd9-l5ldp"] Oct 11 10:52:48.157700 master-2 kubenswrapper[4776]: I1011 10:52:48.150526 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" Oct 11 10:52:48.161695 master-2 kubenswrapper[4776]: I1011 10:52:48.159720 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd846fcd9-l5ldp"] Oct 11 10:52:48.176704 master-2 kubenswrapper[4776]: I1011 10:52:48.170169 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 11 10:52:48.176704 master-2 kubenswrapper[4776]: I1011 10:52:48.170399 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Oct 11 10:52:48.176704 master-2 kubenswrapper[4776]: I1011 10:52:48.170422 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Oct 11 10:52:48.252707 master-2 kubenswrapper[4776]: I1011 10:52:48.252507 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htpgt\" (UniqueName: \"kubernetes.io/projected/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-kube-api-access-htpgt\") pod \"dnsmasq-dns-5fd846fcd9-l5ldp\" (UID: \"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f\") " pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" Oct 11 10:52:48.252707 master-2 kubenswrapper[4776]: I1011 10:52:48.252594 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-config\") pod \"dnsmasq-dns-5fd846fcd9-l5ldp\" (UID: \"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f\") " pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" Oct 11 10:52:48.364692 master-2 kubenswrapper[4776]: I1011 10:52:48.364597 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htpgt\" (UniqueName: \"kubernetes.io/projected/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-kube-api-access-htpgt\") pod \"dnsmasq-dns-5fd846fcd9-l5ldp\" (UID: \"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f\") " pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" Oct 11 10:52:48.364921 master-2 kubenswrapper[4776]: I1011 10:52:48.364823 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-config\") pod \"dnsmasq-dns-5fd846fcd9-l5ldp\" (UID: \"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f\") " pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" Oct 11 10:52:48.365882 master-2 kubenswrapper[4776]: I1011 10:52:48.365846 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-config\") pod \"dnsmasq-dns-5fd846fcd9-l5ldp\" (UID: \"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f\") " 
pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" Oct 11 10:52:48.383091 master-2 kubenswrapper[4776]: I1011 10:52:48.383038 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htpgt\" (UniqueName: \"kubernetes.io/projected/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-kube-api-access-htpgt\") pod \"dnsmasq-dns-5fd846fcd9-l5ldp\" (UID: \"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f\") " pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" Oct 11 10:52:48.578765 master-2 kubenswrapper[4776]: I1011 10:52:48.578656 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" Oct 11 10:52:48.980301 master-2 kubenswrapper[4776]: I1011 10:52:48.980260 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd846fcd9-l5ldp"] Oct 11 10:52:48.984261 master-2 kubenswrapper[4776]: W1011 10:52:48.984217 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod977f2c8c_eb07_4fb7_ae7e_6d0688c6081f.slice/crio-bcff671aa0831c6673eae62f2a6c1c6fa0565bd88455b6fc9735c4678ae0771e WatchSource:0}: Error finding container bcff671aa0831c6673eae62f2a6c1c6fa0565bd88455b6fc9735c4678ae0771e: Status 404 returned error can't find the container with id bcff671aa0831c6673eae62f2a6c1c6fa0565bd88455b6fc9735c4678ae0771e Oct 11 10:52:49.082074 master-2 kubenswrapper[4776]: I1011 10:52:49.081981 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" event={"ID":"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f","Type":"ContainerStarted","Data":"bcff671aa0831c6673eae62f2a6c1c6fa0565bd88455b6fc9735c4678ae0771e"} Oct 11 10:52:51.075149 master-2 kubenswrapper[4776]: I1011 10:52:51.075039 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd846fcd9-l5ldp"] Oct 11 10:52:51.135896 master-2 kubenswrapper[4776]: I1011 10:52:51.135590 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86d565bb9-rbc2v"] Oct 11 10:52:51.136824 master-2 kubenswrapper[4776]: I1011 10:52:51.136794 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:52:51.139578 master-2 kubenswrapper[4776]: I1011 10:52:51.139550 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 11 10:52:51.141796 master-2 kubenswrapper[4776]: I1011 10:52:51.141228 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86d565bb9-rbc2v"] Oct 11 10:52:51.203777 master-2 kubenswrapper[4776]: I1011 10:52:51.203729 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnm67\" (UniqueName: \"kubernetes.io/projected/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-kube-api-access-vnm67\") pod \"dnsmasq-dns-86d565bb9-rbc2v\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") " pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:52:51.203985 master-2 kubenswrapper[4776]: I1011 10:52:51.203944 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-dns-svc\") pod \"dnsmasq-dns-86d565bb9-rbc2v\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") " pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:52:51.204155 master-2 kubenswrapper[4776]: I1011 10:52:51.204133 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-config\") pod \"dnsmasq-dns-86d565bb9-rbc2v\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") " pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:52:51.306961 master-2 kubenswrapper[4776]: I1011 10:52:51.306757 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-config\") pod \"dnsmasq-dns-86d565bb9-rbc2v\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") " pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:52:51.306961 master-2 kubenswrapper[4776]: I1011 10:52:51.306863 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnm67\" (UniqueName: \"kubernetes.io/projected/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-kube-api-access-vnm67\") pod \"dnsmasq-dns-86d565bb9-rbc2v\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") " pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:52:51.306961 master-2 kubenswrapper[4776]: I1011 10:52:51.306939 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-dns-svc\") pod \"dnsmasq-dns-86d565bb9-rbc2v\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") " pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:52:51.310426 master-2 kubenswrapper[4776]: I1011 10:52:51.308947 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-dns-svc\") pod \"dnsmasq-dns-86d565bb9-rbc2v\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") " pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:52:51.310426 master-2 kubenswrapper[4776]: I1011 10:52:51.309361 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-config\") pod \"dnsmasq-dns-86d565bb9-rbc2v\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") " 
pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:52:51.344756 master-2 kubenswrapper[4776]: I1011 10:52:51.338056 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnm67\" (UniqueName: \"kubernetes.io/projected/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-kube-api-access-vnm67\") pod \"dnsmasq-dns-86d565bb9-rbc2v\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") " pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:52:51.457255 master-2 kubenswrapper[4776]: I1011 10:52:51.457123 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:52:57.099702 master-2 kubenswrapper[4776]: I1011 10:52:57.099588 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 11 10:52:57.101255 master-2 kubenswrapper[4776]: I1011 10:52:57.100774 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 11 10:52:57.110738 master-2 kubenswrapper[4776]: I1011 10:52:57.110688 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 11 10:52:57.206862 master-2 kubenswrapper[4776]: I1011 10:52:57.206810 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm52p\" (UniqueName: \"kubernetes.io/projected/cbfedacb-2045-4297-be8f-3582dd2bcd7b-kube-api-access-cm52p\") pod \"kube-state-metrics-0\" (UID: \"cbfedacb-2045-4297-be8f-3582dd2bcd7b\") " pod="openstack/kube-state-metrics-0" Oct 11 10:52:57.310284 master-2 kubenswrapper[4776]: I1011 10:52:57.310116 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm52p\" (UniqueName: \"kubernetes.io/projected/cbfedacb-2045-4297-be8f-3582dd2bcd7b-kube-api-access-cm52p\") pod \"kube-state-metrics-0\" (UID: \"cbfedacb-2045-4297-be8f-3582dd2bcd7b\") " pod="openstack/kube-state-metrics-0" Oct 11 10:52:57.340932 master-2 kubenswrapper[4776]: I1011 10:52:57.340707 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm52p\" (UniqueName: \"kubernetes.io/projected/cbfedacb-2045-4297-be8f-3582dd2bcd7b-kube-api-access-cm52p\") pod \"kube-state-metrics-0\" (UID: \"cbfedacb-2045-4297-be8f-3582dd2bcd7b\") " pod="openstack/kube-state-metrics-0" Oct 11 10:52:57.460560 master-2 kubenswrapper[4776]: I1011 10:52:57.460495 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 11 10:52:57.900707 master-2 kubenswrapper[4776]: I1011 10:52:57.900571 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Oct 11 10:52:57.905034 master-2 kubenswrapper[4776]: I1011 10:52:57.904920 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:57.909020 master-2 kubenswrapper[4776]: I1011 10:52:57.908984 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Oct 11 10:52:57.909205 master-2 kubenswrapper[4776]: I1011 10:52:57.909182 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Oct 11 10:52:57.909386 master-2 kubenswrapper[4776]: I1011 10:52:57.909364 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Oct 11 10:52:57.920564 master-2 kubenswrapper[4776]: I1011 10:52:57.920504 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Oct 11 10:52:58.025496 master-2 kubenswrapper[4776]: I1011 10:52:58.025071 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8d885df4-18cc-401a-8226-cd3d17b3f770-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.025496 master-2 kubenswrapper[4776]: I1011 10:52:58.025140 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8d885df4-18cc-401a-8226-cd3d17b3f770-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.025496 master-2 kubenswrapper[4776]: I1011 10:52:58.025187 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8d885df4-18cc-401a-8226-cd3d17b3f770-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.025496 master-2 kubenswrapper[4776]: I1011 10:52:58.025216 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8d885df4-18cc-401a-8226-cd3d17b3f770-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.025496 master-2 kubenswrapper[4776]: I1011 10:52:58.025379 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48qkj\" (UniqueName: \"kubernetes.io/projected/8d885df4-18cc-401a-8226-cd3d17b3f770-kube-api-access-48qkj\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.025496 master-2 kubenswrapper[4776]: I1011 10:52:58.025470 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/8d885df4-18cc-401a-8226-cd3d17b3f770-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.126622 master-2 kubenswrapper[4776]: I1011 10:52:58.126568 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/8d885df4-18cc-401a-8226-cd3d17b3f770-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.127229 master-2 kubenswrapper[4776]: I1011 10:52:58.126657 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8d885df4-18cc-401a-8226-cd3d17b3f770-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.127229 master-2 kubenswrapper[4776]: I1011 10:52:58.126695 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8d885df4-18cc-401a-8226-cd3d17b3f770-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.127229 master-2 kubenswrapper[4776]: I1011 10:52:58.126716 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8d885df4-18cc-401a-8226-cd3d17b3f770-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.127229 master-2 kubenswrapper[4776]: I1011 10:52:58.126780 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8d885df4-18cc-401a-8226-cd3d17b3f770-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.127229 master-2 kubenswrapper[4776]: I1011 10:52:58.126813 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48qkj\" (UniqueName: \"kubernetes.io/projected/8d885df4-18cc-401a-8226-cd3d17b3f770-kube-api-access-48qkj\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.127608 master-2 kubenswrapper[4776]: I1011 10:52:58.127575 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/8d885df4-18cc-401a-8226-cd3d17b3f770-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.130451 master-2 kubenswrapper[4776]: I1011 10:52:58.130418 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8d885df4-18cc-401a-8226-cd3d17b3f770-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.131074 master-2 kubenswrapper[4776]: I1011 10:52:58.131034 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8d885df4-18cc-401a-8226-cd3d17b3f770-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.131997 master-2 kubenswrapper[4776]: I1011 10:52:58.131930 4776 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8d885df4-18cc-401a-8226-cd3d17b3f770-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.134214 master-2 kubenswrapper[4776]: I1011 10:52:58.134153 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8d885df4-18cc-401a-8226-cd3d17b3f770-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.165803 master-2 kubenswrapper[4776]: I1011 10:52:58.165755 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48qkj\" (UniqueName: \"kubernetes.io/projected/8d885df4-18cc-401a-8226-cd3d17b3f770-kube-api-access-48qkj\") pod \"alertmanager-metric-storage-0\" (UID: \"8d885df4-18cc-401a-8226-cd3d17b3f770\") " pod="openstack/alertmanager-metric-storage-0" Oct 11 10:52:58.234140 master-2 kubenswrapper[4776]: I1011 10:52:58.234092 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Oct 11 10:53:00.876004 master-2 kubenswrapper[4776]: I1011 10:53:00.873983 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Oct 11 10:53:00.876004 master-2 kubenswrapper[4776]: I1011 10:53:00.875107 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Oct 11 10:53:00.878073 master-2 kubenswrapper[4776]: I1011 10:53:00.878034 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 11 10:53:00.880739 master-2 kubenswrapper[4776]: I1011 10:53:00.878323 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 11 10:53:00.880739 master-2 kubenswrapper[4776]: I1011 10:53:00.878796 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 11 10:53:00.880739 master-2 kubenswrapper[4776]: I1011 10:53:00.879885 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 11 10:53:00.880739 master-2 kubenswrapper[4776]: I1011 10:53:00.880047 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 11 10:53:00.880739 master-2 kubenswrapper[4776]: I1011 10:53:00.880474 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 11 10:53:00.883690 master-2 kubenswrapper[4776]: I1011 10:53:00.883646 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Oct 11 10:53:00.971928 master-2 kubenswrapper[4776]: I1011 10:53:00.971861 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:00.971928 master-2 kubenswrapper[4776]: I1011 10:53:00.971923 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-server-conf\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:00.972201 master-2 kubenswrapper[4776]: I1011 10:53:00.971960 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:00.972201 master-2 kubenswrapper[4776]: I1011 10:53:00.972040 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkqfd\" (UniqueName: \"kubernetes.io/projected/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-kube-api-access-lkqfd\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:00.972201 master-2 kubenswrapper[4776]: I1011 10:53:00.972063 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6a4048e9-376b-49f0-a75f-a9d480ba8c96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b284115-4926-4f52-9901-1ca5f504b0f5\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:00.972201 master-2 kubenswrapper[4776]: I1011 10:53:00.972091 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-pod-info\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:00.972201 master-2 kubenswrapper[4776]: I1011 10:53:00.972116 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:00.972201 master-2 kubenswrapper[4776]: I1011 10:53:00.972129 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:00.972201 master-2 kubenswrapper[4776]: I1011 10:53:00.972150 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:00.972201 master-2 kubenswrapper[4776]: I1011 10:53:00.972191 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:00.972201 master-2 kubenswrapper[4776]: I1011 10:53:00.972211 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-config-data\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.073868 master-2 kubenswrapper[4776]: I1011 10:53:01.073803 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-pod-info\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.074073 master-2 kubenswrapper[4776]: I1011 10:53:01.073876 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.074073 master-2 kubenswrapper[4776]: I1011 10:53:01.073903 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.074073 master-2 kubenswrapper[4776]: I1011 10:53:01.073934 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.074073 master-2 kubenswrapper[4776]: I1011 10:53:01.074049 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.074266 master-2 kubenswrapper[4776]: I1011 10:53:01.074206 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-config-data\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.074266 master-2 kubenswrapper[4776]: I1011 10:53:01.074245 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.077437 master-2 kubenswrapper[4776]: I1011 10:53:01.074719 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-server-conf\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.077437 master-2 kubenswrapper[4776]: I1011 10:53:01.074780 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-tls\") pod 
\"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.077437 master-2 kubenswrapper[4776]: I1011 10:53:01.074826 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkqfd\" (UniqueName: \"kubernetes.io/projected/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-kube-api-access-lkqfd\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.077437 master-2 kubenswrapper[4776]: I1011 10:53:01.074858 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6a4048e9-376b-49f0-a75f-a9d480ba8c96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b284115-4926-4f52-9901-1ca5f504b0f5\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.077437 master-2 kubenswrapper[4776]: I1011 10:53:01.074858 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.077437 master-2 kubenswrapper[4776]: I1011 10:53:01.074872 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.077437 master-2 kubenswrapper[4776]: I1011 10:53:01.075804 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-config-data\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.077437 master-2 kubenswrapper[4776]: I1011 10:53:01.076716 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.077437 master-2 kubenswrapper[4776]: I1011 10:53:01.077229 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-server-conf\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.078514 master-2 kubenswrapper[4776]: I1011 10:53:01.078489 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-pod-info\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.080713 master-2 kubenswrapper[4776]: I1011 10:53:01.080339 4776 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 11 10:53:01.080713 master-2 kubenswrapper[4776]: I1011 10:53:01.080430 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6a4048e9-376b-49f0-a75f-a9d480ba8c96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b284115-4926-4f52-9901-1ca5f504b0f5\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/5078d70166a213ba8ee51375445357cfc2acb4996c86c167f6517a7246ad420b/globalmount\"" pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.080713 master-2 kubenswrapper[4776]: I1011 10:53:01.080638 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.089195 master-2 kubenswrapper[4776]: I1011 10:53:01.089091 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.095081 master-2 kubenswrapper[4776]: I1011 10:53:01.095020 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.137816 master-2 kubenswrapper[4776]: I1011 10:53:01.128119 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkqfd\" (UniqueName: \"kubernetes.io/projected/914ac6d0-5a85-4b2d-b4d4-202def09b0d8-kube-api-access-lkqfd\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:01.137816 master-2 kubenswrapper[4776]: I1011 10:53:01.135936 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8qhqm"] Oct 11 10:53:01.149699 master-2 kubenswrapper[4776]: I1011 10:53:01.147897 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-4m6km"] Oct 11 10:53:01.149699 master-2 kubenswrapper[4776]: I1011 10:53:01.148384 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.153741 master-2 kubenswrapper[4776]: I1011 10:53:01.150411 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:01.155704 master-2 kubenswrapper[4776]: I1011 10:53:01.155482 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Oct 11 10:53:01.155805 master-2 kubenswrapper[4776]: I1011 10:53:01.155750 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Oct 11 10:53:01.174738 master-2 kubenswrapper[4776]: I1011 10:53:01.165371 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8qhqm"] Oct 11 10:53:01.174738 master-2 kubenswrapper[4776]: I1011 10:53:01.165903 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Oct 11 10:53:01.182711 master-2 kubenswrapper[4776]: I1011 10:53:01.176636 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/065373ca-8c0f-489c-a72e-4d1aee1263ba-combined-ca-bundle\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.182711 master-2 kubenswrapper[4776]: I1011 10:53:01.181914 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/065373ca-8c0f-489c-a72e-4d1aee1263ba-scripts\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.182711 master-2 kubenswrapper[4776]: I1011 10:53:01.182032 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/065373ca-8c0f-489c-a72e-4d1aee1263ba-var-log-ovn\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.182711 master-2 kubenswrapper[4776]: I1011 10:53:01.182076 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbk4s\" (UniqueName: \"kubernetes.io/projected/894f72d0-cdc8-4904-b8a4-0e808ce0b855-kube-api-access-xbk4s\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:01.182711 master-2 kubenswrapper[4776]: I1011 10:53:01.182243 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-etc-ovs\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:01.182711 master-2 kubenswrapper[4776]: I1011 10:53:01.182277 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-var-log\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:01.182711 master-2 kubenswrapper[4776]: I1011 10:53:01.182305 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-var-run\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " 
pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:01.182711 master-2 kubenswrapper[4776]: I1011 10:53:01.182329 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-var-lib\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:01.182711 master-2 kubenswrapper[4776]: I1011 10:53:01.182479 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4m6km"] Oct 11 10:53:01.195053 master-2 kubenswrapper[4776]: I1011 10:53:01.195007 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/894f72d0-cdc8-4904-b8a4-0e808ce0b855-scripts\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:01.195289 master-2 kubenswrapper[4776]: I1011 10:53:01.195143 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/065373ca-8c0f-489c-a72e-4d1aee1263ba-var-run-ovn\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.195289 master-2 kubenswrapper[4776]: I1011 10:53:01.195190 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/065373ca-8c0f-489c-a72e-4d1aee1263ba-var-run\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.195289 master-2 kubenswrapper[4776]: I1011 10:53:01.195218 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/065373ca-8c0f-489c-a72e-4d1aee1263ba-ovn-controller-tls-certs\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.195289 master-2 kubenswrapper[4776]: I1011 10:53:01.195281 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7qq6\" (UniqueName: \"kubernetes.io/projected/065373ca-8c0f-489c-a72e-4d1aee1263ba-kube-api-access-s7qq6\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.298062 master-2 kubenswrapper[4776]: I1011 10:53:01.297987 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-etc-ovs\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:01.298062 master-2 kubenswrapper[4776]: I1011 10:53:01.298050 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-var-log\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:01.298491 master-2 kubenswrapper[4776]: I1011 10:53:01.298071 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lib\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-var-lib\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:01.298491 master-2 kubenswrapper[4776]: I1011 10:53:01.298103 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-var-run\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:01.298491 master-2 kubenswrapper[4776]: I1011 10:53:01.298121 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/894f72d0-cdc8-4904-b8a4-0e808ce0b855-scripts\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:01.298491 master-2 kubenswrapper[4776]: I1011 10:53:01.298175 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/065373ca-8c0f-489c-a72e-4d1aee1263ba-var-run-ovn\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.298491 master-2 kubenswrapper[4776]: I1011 10:53:01.298200 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/065373ca-8c0f-489c-a72e-4d1aee1263ba-ovn-controller-tls-certs\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.298491 master-2 kubenswrapper[4776]: I1011 10:53:01.298221 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/065373ca-8c0f-489c-a72e-4d1aee1263ba-var-run\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.298491 master-2 kubenswrapper[4776]: I1011 10:53:01.298268 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7qq6\" (UniqueName: \"kubernetes.io/projected/065373ca-8c0f-489c-a72e-4d1aee1263ba-kube-api-access-s7qq6\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.298491 master-2 kubenswrapper[4776]: I1011 10:53:01.298298 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/065373ca-8c0f-489c-a72e-4d1aee1263ba-combined-ca-bundle\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.298491 master-2 kubenswrapper[4776]: I1011 10:53:01.298329 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/065373ca-8c0f-489c-a72e-4d1aee1263ba-scripts\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.298491 master-2 kubenswrapper[4776]: I1011 10:53:01.298361 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/065373ca-8c0f-489c-a72e-4d1aee1263ba-var-log-ovn\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.298491 master-2 kubenswrapper[4776]: I1011 10:53:01.298383 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbk4s\" (UniqueName: \"kubernetes.io/projected/894f72d0-cdc8-4904-b8a4-0e808ce0b855-kube-api-access-xbk4s\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:01.300150 master-2 kubenswrapper[4776]: I1011 10:53:01.299626 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-etc-ovs\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:01.300150 master-2 kubenswrapper[4776]: I1011 10:53:01.299989 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-var-log\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:01.300391 master-2 kubenswrapper[4776]: I1011 10:53:01.300188 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/065373ca-8c0f-489c-a72e-4d1aee1263ba-var-run\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.300477 master-2 kubenswrapper[4776]: I1011 10:53:01.300433 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/065373ca-8c0f-489c-a72e-4d1aee1263ba-var-log-ovn\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.300963 master-2 kubenswrapper[4776]: I1011 10:53:01.300891 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-var-run\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:01.301164 master-2 kubenswrapper[4776]: I1011 10:53:01.301093 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/894f72d0-cdc8-4904-b8a4-0e808ce0b855-var-lib\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:01.301270 master-2 kubenswrapper[4776]: I1011 10:53:01.301255 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/065373ca-8c0f-489c-a72e-4d1aee1263ba-var-run-ovn\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.304626 master-2 kubenswrapper[4776]: I1011 10:53:01.303979 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/065373ca-8c0f-489c-a72e-4d1aee1263ba-ovn-controller-tls-certs\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " 
pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.305335 master-2 kubenswrapper[4776]: I1011 10:53:01.305241 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/065373ca-8c0f-489c-a72e-4d1aee1263ba-scripts\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.306142 master-2 kubenswrapper[4776]: I1011 10:53:01.306049 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/065373ca-8c0f-489c-a72e-4d1aee1263ba-combined-ca-bundle\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.306697 master-2 kubenswrapper[4776]: I1011 10:53:01.306639 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/894f72d0-cdc8-4904-b8a4-0e808ce0b855-scripts\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:01.319514 master-2 kubenswrapper[4776]: I1011 10:53:01.319465 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbk4s\" (UniqueName: \"kubernetes.io/projected/894f72d0-cdc8-4904-b8a4-0e808ce0b855-kube-api-access-xbk4s\") pod \"ovn-controller-ovs-4m6km\" (UID: \"894f72d0-cdc8-4904-b8a4-0e808ce0b855\") " pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:01.323114 master-2 kubenswrapper[4776]: I1011 10:53:01.323073 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7qq6\" (UniqueName: \"kubernetes.io/projected/065373ca-8c0f-489c-a72e-4d1aee1263ba-kube-api-access-s7qq6\") pod \"ovn-controller-8qhqm\" (UID: \"065373ca-8c0f-489c-a72e-4d1aee1263ba\") " pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.498237 master-2 kubenswrapper[4776]: I1011 10:53:01.498174 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:01.511570 master-2 kubenswrapper[4776]: I1011 10:53:01.511519 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:02.243109 master-2 kubenswrapper[4776]: I1011 10:53:02.243046 4776 generic.go:334] "Generic (PLEG): container finished" podID="977f2c8c-eb07-4fb7-ae7e-6d0688c6081f" containerID="d18dd7e7c04452149778ab81532866efd7ae9784ef1c20a46c1951260c713b9f" exitCode=0 Oct 11 10:53:02.243109 master-2 kubenswrapper[4776]: I1011 10:53:02.243107 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" event={"ID":"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f","Type":"ContainerDied","Data":"d18dd7e7c04452149778ab81532866efd7ae9784ef1c20a46c1951260c713b9f"} Oct 11 10:53:02.457898 master-2 kubenswrapper[4776]: I1011 10:53:02.445159 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Oct 11 10:53:02.474701 master-2 kubenswrapper[4776]: I1011 10:53:02.470108 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86d565bb9-rbc2v"] Oct 11 10:53:02.640842 master-2 kubenswrapper[4776]: I1011 10:53:02.640783 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6a4048e9-376b-49f0-a75f-a9d480ba8c96\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b284115-4926-4f52-9901-1ca5f504b0f5\") pod \"rabbitmq-server-2\" (UID: \"914ac6d0-5a85-4b2d-b4d4-202def09b0d8\") " pod="openstack/rabbitmq-server-2" Oct 11 10:53:02.704712 master-2 kubenswrapper[4776]: I1011 10:53:02.704656 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8qhqm"] Oct 11 10:53:02.706659 master-2 kubenswrapper[4776]: W1011 10:53:02.706549 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod065373ca_8c0f_489c_a72e_4d1aee1263ba.slice/crio-e6d108dd528e0cd5a6ce7bcd2b922cb955856026c53b06005f1ffc09c8b171fe WatchSource:0}: Error finding container e6d108dd528e0cd5a6ce7bcd2b922cb955856026c53b06005f1ffc09c8b171fe: Status 404 returned error can't find the container with id e6d108dd528e0cd5a6ce7bcd2b922cb955856026c53b06005f1ffc09c8b171fe Oct 11 10:53:02.712931 master-2 kubenswrapper[4776]: I1011 10:53:02.712885 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 11 10:53:02.731746 master-2 kubenswrapper[4776]: I1011 10:53:02.731595 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Oct 11 10:53:02.890834 master-2 kubenswrapper[4776]: I1011 10:53:02.890601 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-kcjm9"] Oct 11 10:53:02.891879 master-2 kubenswrapper[4776]: I1011 10:53:02.891842 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:02.899371 master-2 kubenswrapper[4776]: I1011 10:53:02.897568 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Oct 11 10:53:02.899371 master-2 kubenswrapper[4776]: I1011 10:53:02.897771 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Oct 11 10:53:02.916324 master-2 kubenswrapper[4776]: I1011 10:53:02.912514 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-kcjm9"] Oct 11 10:53:02.959120 master-2 kubenswrapper[4776]: I1011 10:53:02.958271 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebedfef5-9861-41cd-a97e-c59ff798091b-config\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:02.959120 master-2 kubenswrapper[4776]: I1011 10:53:02.958327 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ebedfef5-9861-41cd-a97e-c59ff798091b-ovs-rundir\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:02.959120 master-2 kubenswrapper[4776]: I1011 10:53:02.958347 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebedfef5-9861-41cd-a97e-c59ff798091b-combined-ca-bundle\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:02.959120 master-2 kubenswrapper[4776]: I1011 10:53:02.958399 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebedfef5-9861-41cd-a97e-c59ff798091b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:02.959120 master-2 kubenswrapper[4776]: I1011 10:53:02.958424 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ebedfef5-9861-41cd-a97e-c59ff798091b-ovn-rundir\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:02.959120 master-2 kubenswrapper[4776]: I1011 10:53:02.958444 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cpnt\" (UniqueName: \"kubernetes.io/projected/ebedfef5-9861-41cd-a97e-c59ff798091b-kube-api-access-2cpnt\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.072601 master-2 kubenswrapper[4776]: I1011 10:53:03.071666 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebedfef5-9861-41cd-a97e-c59ff798091b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 
10:53:03.072601 master-2 kubenswrapper[4776]: I1011 10:53:03.071748 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ebedfef5-9861-41cd-a97e-c59ff798091b-ovn-rundir\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.072601 master-2 kubenswrapper[4776]: I1011 10:53:03.071775 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cpnt\" (UniqueName: \"kubernetes.io/projected/ebedfef5-9861-41cd-a97e-c59ff798091b-kube-api-access-2cpnt\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.072601 master-2 kubenswrapper[4776]: I1011 10:53:03.071910 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebedfef5-9861-41cd-a97e-c59ff798091b-config\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.072601 master-2 kubenswrapper[4776]: I1011 10:53:03.071970 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ebedfef5-9861-41cd-a97e-c59ff798091b-ovs-rundir\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.072601 master-2 kubenswrapper[4776]: I1011 10:53:03.071989 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebedfef5-9861-41cd-a97e-c59ff798091b-combined-ca-bundle\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.072601 master-2 kubenswrapper[4776]: I1011 10:53:03.072218 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ebedfef5-9861-41cd-a97e-c59ff798091b-ovn-rundir\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.073954 master-2 kubenswrapper[4776]: I1011 10:53:03.073861 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ebedfef5-9861-41cd-a97e-c59ff798091b-ovs-rundir\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.076542 master-2 kubenswrapper[4776]: I1011 10:53:03.076452 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebedfef5-9861-41cd-a97e-c59ff798091b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.077318 master-2 kubenswrapper[4776]: I1011 10:53:03.077277 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebedfef5-9861-41cd-a97e-c59ff798091b-config\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 
10:53:03.078101 master-2 kubenswrapper[4776]: I1011 10:53:03.078005 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebedfef5-9861-41cd-a97e-c59ff798091b-combined-ca-bundle\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.102296 master-2 kubenswrapper[4776]: I1011 10:53:03.102248 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" Oct 11 10:53:03.104658 master-2 kubenswrapper[4776]: I1011 10:53:03.104606 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cpnt\" (UniqueName: \"kubernetes.io/projected/ebedfef5-9861-41cd-a97e-c59ff798091b-kube-api-access-2cpnt\") pod \"ovn-controller-metrics-kcjm9\" (UID: \"ebedfef5-9861-41cd-a97e-c59ff798091b\") " pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.175496 master-2 kubenswrapper[4776]: I1011 10:53:03.173395 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-config\") pod \"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f\" (UID: \"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f\") " Oct 11 10:53:03.175496 master-2 kubenswrapper[4776]: I1011 10:53:03.173650 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htpgt\" (UniqueName: \"kubernetes.io/projected/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-kube-api-access-htpgt\") pod \"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f\" (UID: \"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f\") " Oct 11 10:53:03.208809 master-2 kubenswrapper[4776]: I1011 10:53:03.194012 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-config" (OuterVolumeSpecName: "config") pod "977f2c8c-eb07-4fb7-ae7e-6d0688c6081f" (UID: "977f2c8c-eb07-4fb7-ae7e-6d0688c6081f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:03.208809 master-2 kubenswrapper[4776]: I1011 10:53:03.196830 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-kube-api-access-htpgt" (OuterVolumeSpecName: "kube-api-access-htpgt") pod "977f2c8c-eb07-4fb7-ae7e-6d0688c6081f" (UID: "977f2c8c-eb07-4fb7-ae7e-6d0688c6081f"). InnerVolumeSpecName "kube-api-access-htpgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:53:03.218739 master-2 kubenswrapper[4776]: I1011 10:53:03.218601 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-kcjm9" Oct 11 10:53:03.262319 master-2 kubenswrapper[4776]: I1011 10:53:03.262247 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"8d885df4-18cc-401a-8226-cd3d17b3f770","Type":"ContainerStarted","Data":"c5722ea7768634ccef469501bce467b2c6a8e3ecf85e25ba38414154e4e1054f"} Oct 11 10:53:03.264568 master-2 kubenswrapper[4776]: I1011 10:53:03.264404 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" event={"ID":"977f2c8c-eb07-4fb7-ae7e-6d0688c6081f","Type":"ContainerDied","Data":"bcff671aa0831c6673eae62f2a6c1c6fa0565bd88455b6fc9735c4678ae0771e"} Oct 11 10:53:03.264652 master-2 kubenswrapper[4776]: I1011 10:53:03.264574 4776 scope.go:117] "RemoveContainer" containerID="d18dd7e7c04452149778ab81532866efd7ae9784ef1c20a46c1951260c713b9f" Oct 11 10:53:03.264736 master-2 kubenswrapper[4776]: I1011 10:53:03.264719 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd846fcd9-l5ldp" Oct 11 10:53:03.275738 master-2 kubenswrapper[4776]: I1011 10:53:03.273555 4776 generic.go:334] "Generic (PLEG): container finished" podID="c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" containerID="c0ec6bd81c8aa0ea34befa100e7e8df08ded596440992a0b1a3ffb750f413afb" exitCode=0 Oct 11 10:53:03.275738 master-2 kubenswrapper[4776]: I1011 10:53:03.273662 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" event={"ID":"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5","Type":"ContainerDied","Data":"c0ec6bd81c8aa0ea34befa100e7e8df08ded596440992a0b1a3ffb750f413afb"} Oct 11 10:53:03.275738 master-2 kubenswrapper[4776]: I1011 10:53:03.273715 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" event={"ID":"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5","Type":"ContainerStarted","Data":"f2516a6982d29372b482ef8dfc55f32264f7df4b11429f9906ad40bd48d5344a"} Oct 11 10:53:03.280713 master-2 kubenswrapper[4776]: I1011 10:53:03.280683 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htpgt\" (UniqueName: \"kubernetes.io/projected/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-kube-api-access-htpgt\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:03.280793 master-2 kubenswrapper[4776]: I1011 10:53:03.280716 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:03.283034 master-2 kubenswrapper[4776]: I1011 10:53:03.282727 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8qhqm" event={"ID":"065373ca-8c0f-489c-a72e-4d1aee1263ba","Type":"ContainerStarted","Data":"e6d108dd528e0cd5a6ce7bcd2b922cb955856026c53b06005f1ffc09c8b171fe"} Oct 11 10:53:03.284357 master-2 kubenswrapper[4776]: I1011 10:53:03.284290 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cbfedacb-2045-4297-be8f-3582dd2bcd7b","Type":"ContainerStarted","Data":"b03215961b9c5150bde0b4e7e2b48a359893ef2938e93f7fad3388b4aeef63a0"} Oct 11 10:53:03.332124 master-2 kubenswrapper[4776]: I1011 10:53:03.332089 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Oct 11 10:53:03.344082 master-2 kubenswrapper[4776]: W1011 10:53:03.343498 4776 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod914ac6d0_5a85_4b2d_b4d4_202def09b0d8.slice/crio-366dcf107e1a9b7c922316fe84f7467f4c51d97c95f597e79b9f42a1c01e06d8 WatchSource:0}: Error finding container 366dcf107e1a9b7c922316fe84f7467f4c51d97c95f597e79b9f42a1c01e06d8: Status 404 returned error can't find the container with id 366dcf107e1a9b7c922316fe84f7467f4c51d97c95f597e79b9f42a1c01e06d8 Oct 11 10:53:03.391906 master-2 kubenswrapper[4776]: W1011 10:53:03.391853 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod894f72d0_cdc8_4904_b8a4_0e808ce0b855.slice/crio-37de6888943e38243a5011cb787ae7bd9b5c53a123894b556732a4933a52457d WatchSource:0}: Error finding container 37de6888943e38243a5011cb787ae7bd9b5c53a123894b556732a4933a52457d: Status 404 returned error can't find the container with id 37de6888943e38243a5011cb787ae7bd9b5c53a123894b556732a4933a52457d Oct 11 10:53:03.454724 master-2 kubenswrapper[4776]: I1011 10:53:03.454634 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd846fcd9-l5ldp"] Oct 11 10:53:03.460384 master-2 kubenswrapper[4776]: I1011 10:53:03.460251 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fd846fcd9-l5ldp"] Oct 11 10:53:03.481049 master-2 kubenswrapper[4776]: I1011 10:53:03.466662 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4m6km"] Oct 11 10:53:03.793595 master-2 kubenswrapper[4776]: I1011 10:53:03.793517 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-kcjm9"] Oct 11 10:53:03.837874 master-2 kubenswrapper[4776]: W1011 10:53:03.837804 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebedfef5_9861_41cd_a97e_c59ff798091b.slice/crio-300ffb022d5bded4d28f3ad72b64861cfb74455a2a9db59fbf18eeb7fea6cde8 WatchSource:0}: Error finding container 300ffb022d5bded4d28f3ad72b64861cfb74455a2a9db59fbf18eeb7fea6cde8: Status 404 returned error can't find the container with id 300ffb022d5bded4d28f3ad72b64861cfb74455a2a9db59fbf18eeb7fea6cde8 Oct 11 10:53:04.072857 master-2 kubenswrapper[4776]: I1011 10:53:04.072642 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="977f2c8c-eb07-4fb7-ae7e-6d0688c6081f" path="/var/lib/kubelet/pods/977f2c8c-eb07-4fb7-ae7e-6d0688c6081f/volumes" Oct 11 10:53:04.317576 master-2 kubenswrapper[4776]: I1011 10:53:04.316799 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" event={"ID":"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5","Type":"ContainerStarted","Data":"21c4e7fcdc124283dbaf4b8ea241a8c8865c7c94e13f81aee9cfb5aab43aac68"} Oct 11 10:53:04.317576 master-2 kubenswrapper[4776]: I1011 10:53:04.316879 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:53:04.319642 master-2 kubenswrapper[4776]: I1011 10:53:04.319599 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"914ac6d0-5a85-4b2d-b4d4-202def09b0d8","Type":"ContainerStarted","Data":"366dcf107e1a9b7c922316fe84f7467f4c51d97c95f597e79b9f42a1c01e06d8"} Oct 11 10:53:04.322584 master-2 kubenswrapper[4776]: I1011 10:53:04.322485 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4m6km" 
event={"ID":"894f72d0-cdc8-4904-b8a4-0e808ce0b855","Type":"ContainerStarted","Data":"37de6888943e38243a5011cb787ae7bd9b5c53a123894b556732a4933a52457d"} Oct 11 10:53:04.325218 master-2 kubenswrapper[4776]: I1011 10:53:04.324891 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-kcjm9" event={"ID":"ebedfef5-9861-41cd-a97e-c59ff798091b","Type":"ContainerStarted","Data":"300ffb022d5bded4d28f3ad72b64861cfb74455a2a9db59fbf18eeb7fea6cde8"} Oct 11 10:53:04.347151 master-2 kubenswrapper[4776]: I1011 10:53:04.346503 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-2"] Oct 11 10:53:04.347151 master-2 kubenswrapper[4776]: E1011 10:53:04.346869 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="977f2c8c-eb07-4fb7-ae7e-6d0688c6081f" containerName="init" Oct 11 10:53:04.347151 master-2 kubenswrapper[4776]: I1011 10:53:04.346883 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="977f2c8c-eb07-4fb7-ae7e-6d0688c6081f" containerName="init" Oct 11 10:53:04.347151 master-2 kubenswrapper[4776]: I1011 10:53:04.347021 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="977f2c8c-eb07-4fb7-ae7e-6d0688c6081f" containerName="init" Oct 11 10:53:04.355322 master-2 kubenswrapper[4776]: I1011 10:53:04.355243 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" podStartSLOduration=13.355222144 podStartE2EDuration="13.355222144s" podCreationTimestamp="2025-10-11 10:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:53:04.352022908 +0000 UTC m=+1619.136449627" watchObservedRunningTime="2025-10-11 10:53:04.355222144 +0000 UTC m=+1619.139648853" Oct 11 10:53:04.361016 master-2 kubenswrapper[4776]: I1011 10:53:04.360972 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.363919 master-2 kubenswrapper[4776]: I1011 10:53:04.363874 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 11 10:53:04.364127 master-2 kubenswrapper[4776]: I1011 10:53:04.364097 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 11 10:53:04.364200 master-2 kubenswrapper[4776]: I1011 10:53:04.364161 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 11 10:53:04.364295 master-2 kubenswrapper[4776]: I1011 10:53:04.364257 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 11 10:53:04.364367 master-2 kubenswrapper[4776]: I1011 10:53:04.364339 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 11 10:53:04.364544 master-2 kubenswrapper[4776]: I1011 10:53:04.364507 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 11 10:53:04.372766 master-2 kubenswrapper[4776]: I1011 10:53:04.369392 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-2"] Oct 11 10:53:04.511720 master-2 kubenswrapper[4776]: I1011 10:53:04.509278 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.511720 master-2 kubenswrapper[4776]: I1011 10:53:04.509337 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7a37e421-023d-428b-918c-6aa5e4cec760\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a34ed919-8cce-4af6-9f47-66263cc58dfa\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.511720 master-2 kubenswrapper[4776]: I1011 10:53:04.509431 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.511720 master-2 kubenswrapper[4776]: I1011 10:53:04.509449 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-server-conf\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.511720 master-2 kubenswrapper[4776]: I1011 10:53:04.509486 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-pod-info\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.511720 master-2 kubenswrapper[4776]: I1011 10:53:04.509506 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.511720 master-2 kubenswrapper[4776]: I1011 10:53:04.509537 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-plugins-conf\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.511720 master-2 kubenswrapper[4776]: I1011 10:53:04.509727 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.511720 master-2 kubenswrapper[4776]: I1011 10:53:04.509849 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk48s\" (UniqueName: \"kubernetes.io/projected/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-kube-api-access-hk48s\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.511720 master-2 kubenswrapper[4776]: I1011 10:53:04.509865 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.511720 master-2 kubenswrapper[4776]: I1011 10:53:04.509901 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-config-data\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.612575 master-2 kubenswrapper[4776]: I1011 10:53:04.612529 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-server-conf\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.612710 master-2 kubenswrapper[4776]: I1011 10:53:04.612579 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.612710 master-2 kubenswrapper[4776]: I1011 10:53:04.612601 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-pod-info\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.612710 
master-2 kubenswrapper[4776]: I1011 10:53:04.612621 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.612710 master-2 kubenswrapper[4776]: I1011 10:53:04.612652 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-plugins-conf\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.612710 master-2 kubenswrapper[4776]: I1011 10:53:04.612667 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.612710 master-2 kubenswrapper[4776]: I1011 10:53:04.612707 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk48s\" (UniqueName: \"kubernetes.io/projected/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-kube-api-access-hk48s\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.612886 master-2 kubenswrapper[4776]: I1011 10:53:04.612723 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.612886 master-2 kubenswrapper[4776]: I1011 10:53:04.612745 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-config-data\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.612886 master-2 kubenswrapper[4776]: I1011 10:53:04.612779 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.612886 master-2 kubenswrapper[4776]: I1011 10:53:04.612800 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7a37e421-023d-428b-918c-6aa5e4cec760\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a34ed919-8cce-4af6-9f47-66263cc58dfa\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.613899 master-2 kubenswrapper[4776]: I1011 10:53:04.613873 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-plugins-conf\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.614786 master-2 
kubenswrapper[4776]: I1011 10:53:04.614763 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.618634 master-2 kubenswrapper[4776]: I1011 10:53:04.615326 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-config-data\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.618634 master-2 kubenswrapper[4776]: I1011 10:53:04.615770 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.618634 master-2 kubenswrapper[4776]: I1011 10:53:04.616401 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-server-conf\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.624444 master-2 kubenswrapper[4776]: I1011 10:53:04.624387 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-pod-info\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.624753 master-2 kubenswrapper[4776]: I1011 10:53:04.624710 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.624868 master-2 kubenswrapper[4776]: I1011 10:53:04.624841 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.624906 master-2 kubenswrapper[4776]: I1011 10:53:04.624841 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.625922 master-2 kubenswrapper[4776]: I1011 10:53:04.625860 4776 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 11 10:53:04.626042 master-2 kubenswrapper[4776]: I1011 10:53:04.625950 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7a37e421-023d-428b-918c-6aa5e4cec760\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a34ed919-8cce-4af6-9f47-66263cc58dfa\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/a8b6e9479ea03f681dc5123c419d7b32975608d13dab5d7e5a7d7a4095c8fa00/globalmount\"" pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:04.641448 master-2 kubenswrapper[4776]: I1011 10:53:04.641409 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk48s\" (UniqueName: \"kubernetes.io/projected/5a8ba065-7ef6-4bab-b20a-3bb274c93fa0-kube-api-access-hk48s\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:05.992673 master-2 kubenswrapper[4776]: I1011 10:53:05.992584 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7a37e421-023d-428b-918c-6aa5e4cec760\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a34ed919-8cce-4af6-9f47-66263cc58dfa\") pod \"rabbitmq-cell1-server-2\" (UID: \"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0\") " pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:06.521703 master-2 kubenswrapper[4776]: I1011 10:53:06.521607 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:09.408637 master-2 kubenswrapper[4776]: I1011 10:53:09.408563 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-2"] Oct 11 10:53:09.409730 master-2 kubenswrapper[4776]: I1011 10:53:09.409681 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-2" Oct 11 10:53:09.412541 master-2 kubenswrapper[4776]: I1011 10:53:09.412467 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Oct 11 10:53:09.414051 master-2 kubenswrapper[4776]: I1011 10:53:09.414025 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Oct 11 10:53:09.414384 master-2 kubenswrapper[4776]: I1011 10:53:09.414358 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Oct 11 10:53:09.414629 master-2 kubenswrapper[4776]: I1011 10:53:09.414605 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Oct 11 10:53:09.455771 master-2 kubenswrapper[4776]: I1011 10:53:09.455416 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-2"] Oct 11 10:53:09.692793 master-2 kubenswrapper[4776]: I1011 10:53:09.692735 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/72994ad3-2bca-4875-97f7-f98c00f64626-kolla-config\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.692793 master-2 kubenswrapper[4776]: I1011 10:53:09.692799 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72994ad3-2bca-4875-97f7-f98c00f64626-operator-scripts\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.693047 master-2 kubenswrapper[4776]: I1011 10:53:09.692827 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9fdda533-7818-4a6f-97c7-9229dafba44c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b09f7c95-55c7-406f-9f86-bdedf27fc584\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.693047 master-2 kubenswrapper[4776]: I1011 10:53:09.692850 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpq5w\" (UniqueName: \"kubernetes.io/projected/72994ad3-2bca-4875-97f7-f98c00f64626-kube-api-access-tpq5w\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.693047 master-2 kubenswrapper[4776]: I1011 10:53:09.692878 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72994ad3-2bca-4875-97f7-f98c00f64626-combined-ca-bundle\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.693047 master-2 kubenswrapper[4776]: I1011 10:53:09.692915 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/72994ad3-2bca-4875-97f7-f98c00f64626-secrets\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.693047 master-2 kubenswrapper[4776]: I1011 10:53:09.692954 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" 
(UniqueName: \"kubernetes.io/empty-dir/72994ad3-2bca-4875-97f7-f98c00f64626-config-data-generated\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.693047 master-2 kubenswrapper[4776]: I1011 10:53:09.692986 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/72994ad3-2bca-4875-97f7-f98c00f64626-galera-tls-certs\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.693047 master-2 kubenswrapper[4776]: I1011 10:53:09.693041 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/72994ad3-2bca-4875-97f7-f98c00f64626-config-data-default\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.794472 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/72994ad3-2bca-4875-97f7-f98c00f64626-config-data-default\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.794548 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/72994ad3-2bca-4875-97f7-f98c00f64626-kolla-config\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.794577 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72994ad3-2bca-4875-97f7-f98c00f64626-operator-scripts\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.794604 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9fdda533-7818-4a6f-97c7-9229dafba44c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b09f7c95-55c7-406f-9f86-bdedf27fc584\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.794625 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpq5w\" (UniqueName: \"kubernetes.io/projected/72994ad3-2bca-4875-97f7-f98c00f64626-kube-api-access-tpq5w\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.794655 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72994ad3-2bca-4875-97f7-f98c00f64626-combined-ca-bundle\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.794774 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: 
\"kubernetes.io/secret/72994ad3-2bca-4875-97f7-f98c00f64626-secrets\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.794818 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/72994ad3-2bca-4875-97f7-f98c00f64626-config-data-generated\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.794847 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/72994ad3-2bca-4875-97f7-f98c00f64626-galera-tls-certs\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.795479 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/72994ad3-2bca-4875-97f7-f98c00f64626-config-data-default\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.796232 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/72994ad3-2bca-4875-97f7-f98c00f64626-config-data-generated\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.796248 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/72994ad3-2bca-4875-97f7-f98c00f64626-kolla-config\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.795546 master-2 kubenswrapper[4776]: I1011 10:53:09.796550 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72994ad3-2bca-4875-97f7-f98c00f64626-operator-scripts\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.797231 master-2 kubenswrapper[4776]: I1011 10:53:09.796987 4776 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 11 10:53:09.797231 master-2 kubenswrapper[4776]: I1011 10:53:09.797008 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9fdda533-7818-4a6f-97c7-9229dafba44c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b09f7c95-55c7-406f-9f86-bdedf27fc584\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/8dbd96f1521e66451c285a8c0c796fd9fb0ab49b3c5a56ee0fe73e1b546de5b3/globalmount\"" pod="openstack/openstack-galera-2" Oct 11 10:53:09.797940 master-2 kubenswrapper[4776]: I1011 10:53:09.797917 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/72994ad3-2bca-4875-97f7-f98c00f64626-galera-tls-certs\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.798235 master-2 kubenswrapper[4776]: I1011 10:53:09.798211 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/72994ad3-2bca-4875-97f7-f98c00f64626-secrets\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.799441 master-2 kubenswrapper[4776]: I1011 10:53:09.799397 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72994ad3-2bca-4875-97f7-f98c00f64626-combined-ca-bundle\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:09.854002 master-2 kubenswrapper[4776]: I1011 10:53:09.851896 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpq5w\" (UniqueName: \"kubernetes.io/projected/72994ad3-2bca-4875-97f7-f98c00f64626-kube-api-access-tpq5w\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:10.092926 master-2 kubenswrapper[4776]: I1011 10:53:10.092807 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-2"] Oct 11 10:53:11.175255 master-2 kubenswrapper[4776]: I1011 10:53:11.175202 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9fdda533-7818-4a6f-97c7-9229dafba44c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b09f7c95-55c7-406f-9f86-bdedf27fc584\") pod \"openstack-galera-2\" (UID: \"72994ad3-2bca-4875-97f7-f98c00f64626\") " pod="openstack/openstack-galera-2" Oct 11 10:53:11.212607 master-2 kubenswrapper[4776]: W1011 10:53:11.212541 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a8ba065_7ef6_4bab_b20a_3bb274c93fa0.slice/crio-b456290778239efbd3488a050ffc910f25294af1c27bf54359535d2e0a2c4ff0 WatchSource:0}: Error finding container b456290778239efbd3488a050ffc910f25294af1c27bf54359535d2e0a2c4ff0: Status 404 returned error can't find the container with id b456290778239efbd3488a050ffc910f25294af1c27bf54359535d2e0a2c4ff0 Oct 11 10:53:11.260760 master-2 kubenswrapper[4776]: I1011 10:53:11.260655 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-2" Oct 11 10:53:11.376787 master-2 kubenswrapper[4776]: I1011 10:53:11.376709 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-2" event={"ID":"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0","Type":"ContainerStarted","Data":"b456290778239efbd3488a050ffc910f25294af1c27bf54359535d2e0a2c4ff0"} Oct 11 10:53:11.472950 master-2 kubenswrapper[4776]: I1011 10:53:11.472873 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:53:12.011645 master-2 kubenswrapper[4776]: I1011 10:53:12.011516 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 11 10:53:12.012983 master-2 kubenswrapper[4776]: I1011 10:53:12.012951 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.016303 master-2 kubenswrapper[4776]: I1011 10:53:12.016256 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Oct 11 10:53:12.016487 master-2 kubenswrapper[4776]: I1011 10:53:12.016452 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Oct 11 10:53:12.016564 master-2 kubenswrapper[4776]: I1011 10:53:12.016542 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Oct 11 10:53:12.071996 master-2 kubenswrapper[4776]: I1011 10:53:12.071940 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 11 10:53:12.205213 master-2 kubenswrapper[4776]: I1011 10:53:12.204921 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-2"] Oct 11 10:53:12.336105 master-2 kubenswrapper[4776]: I1011 10:53:12.336048 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.336105 master-2 kubenswrapper[4776]: I1011 10:53:12.336103 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.336343 master-2 kubenswrapper[4776]: I1011 10:53:12.336210 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.336343 master-2 kubenswrapper[4776]: I1011 10:53:12.336233 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.336343 master-2 kubenswrapper[4776]: I1011 
10:53:12.336269 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n4s9\" (UniqueName: \"kubernetes.io/projected/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-kube-api-access-8n4s9\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.336343 master-2 kubenswrapper[4776]: I1011 10:53:12.336303 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.336343 master-2 kubenswrapper[4776]: I1011 10:53:12.336321 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.336487 master-2 kubenswrapper[4776]: I1011 10:53:12.336346 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-057ac20b-a0d2-4376-998d-12784c232497\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0888aa49-e062-4b2f-84d8-747b79ae5873\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.336487 master-2 kubenswrapper[4776]: I1011 10:53:12.336450 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.394405 master-2 kubenswrapper[4776]: I1011 10:53:12.394368 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-2" event={"ID":"72994ad3-2bca-4875-97f7-f98c00f64626","Type":"ContainerStarted","Data":"c9e5ea80cfca9480a9e66559a3e097434f14a45d5494bb45edefe34f299f1163"} Oct 11 10:53:12.395838 master-2 kubenswrapper[4776]: I1011 10:53:12.395811 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8qhqm" event={"ID":"065373ca-8c0f-489c-a72e-4d1aee1263ba","Type":"ContainerStarted","Data":"2f405c7ed435c3f0da0862e4abd74ebd7954c57283604b84d8fd56ee58376df3"} Oct 11 10:53:12.396074 master-2 kubenswrapper[4776]: I1011 10:53:12.396043 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:12.398623 master-2 kubenswrapper[4776]: I1011 10:53:12.398376 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cbfedacb-2045-4297-be8f-3582dd2bcd7b","Type":"ContainerStarted","Data":"6f7f88ff1f83ffde680d343beac013ec48c1906c3184c197f7e12a2ad3e3ad7b"} Oct 11 10:53:12.398623 master-2 kubenswrapper[4776]: I1011 10:53:12.398476 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 11 10:53:12.401420 master-2 kubenswrapper[4776]: I1011 10:53:12.401386 4776 generic.go:334] "Generic (PLEG): container finished" podID="894f72d0-cdc8-4904-b8a4-0e808ce0b855" 
containerID="a9b3fc0f39b790a2509ba4093c0b48a1cff8525f410d5e2ce8e8fd0c638fec0a" exitCode=0 Oct 11 10:53:12.401492 master-2 kubenswrapper[4776]: I1011 10:53:12.401454 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4m6km" event={"ID":"894f72d0-cdc8-4904-b8a4-0e808ce0b855","Type":"ContainerDied","Data":"a9b3fc0f39b790a2509ba4093c0b48a1cff8525f410d5e2ce8e8fd0c638fec0a"} Oct 11 10:53:12.405489 master-2 kubenswrapper[4776]: I1011 10:53:12.405456 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-kcjm9" event={"ID":"ebedfef5-9861-41cd-a97e-c59ff798091b","Type":"ContainerStarted","Data":"548b2d960258bc09cc9405ae02a280ce9e01bfa702edff63d1f914a2612eb61f"} Oct 11 10:53:12.438227 master-2 kubenswrapper[4776]: I1011 10:53:12.438104 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.438331 master-2 kubenswrapper[4776]: I1011 10:53:12.438232 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.438331 master-2 kubenswrapper[4776]: I1011 10:53:12.438286 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.438429 master-2 kubenswrapper[4776]: I1011 10:53:12.438386 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.438429 master-2 kubenswrapper[4776]: I1011 10:53:12.438427 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.438769 master-2 kubenswrapper[4776]: I1011 10:53:12.438463 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n4s9\" (UniqueName: \"kubernetes.io/projected/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-kube-api-access-8n4s9\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.438769 master-2 kubenswrapper[4776]: I1011 10:53:12.438510 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.438769 master-2 
kubenswrapper[4776]: I1011 10:53:12.438529 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.438769 master-2 kubenswrapper[4776]: I1011 10:53:12.438551 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-057ac20b-a0d2-4376-998d-12784c232497\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0888aa49-e062-4b2f-84d8-747b79ae5873\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.438769 master-2 kubenswrapper[4776]: I1011 10:53:12.438630 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.440141 master-2 kubenswrapper[4776]: I1011 10:53:12.439631 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.440141 master-2 kubenswrapper[4776]: I1011 10:53:12.440093 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.440736 master-2 kubenswrapper[4776]: I1011 10:53:12.440704 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.442113 master-2 kubenswrapper[4776]: I1011 10:53:12.441984 4776 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
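Between SyncLoop ADD and the first ContainerStarted event, every volume of openstack-cell1-galera-0 passes through the same three reconciler stages recorded above: VerifyControllerAttachedVolume, MountVolume started, and MountVolume.SetUp (or MountDevice) succeeded. When following a single pod through interleaved output like this, filtering on the pod UID is usually enough; a small stdlib-only sketch over a saved copy of the journal (the file name is hypothetical, the UID is the one from the entries above):

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        const podUID = "a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9" // openstack-cell1-galera-0
        f, err := os.Open("kubelet-journal.log")              // hypothetical capture of this journal
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            line := sc.Text()
            // Keep only the volume lifecycle for this pod.
            if strings.Contains(line, podUID) && strings.Contains(line, "MountVolume") {
                fmt.Println(line)
            }
        }
        if err := sc.Err(); err != nil {
            log.Fatal(err)
        }
    }

Swapping "MountVolume" for "SyncLoop (probe)" or "PLEG" isolates the probe and container lifecycle events for the same pod instead.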
Oct 11 10:53:12.442189 master-2 kubenswrapper[4776]: I1011 10:53:12.442116 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-057ac20b-a0d2-4376-998d-12784c232497\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0888aa49-e062-4b2f-84d8-747b79ae5873\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/8b49609e0b8df1fc3e0f10240da586e838b76c0525e8f270f27010349d1c9159/globalmount\"" pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.443553 master-2 kubenswrapper[4776]: I1011 10:53:12.443518 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.444344 master-2 kubenswrapper[4776]: I1011 10:53:12.444235 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.445259 master-2 kubenswrapper[4776]: I1011 10:53:12.445206 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.613656 master-2 kubenswrapper[4776]: I1011 10:53:12.613591 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n4s9\" (UniqueName: \"kubernetes.io/projected/a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9-kube-api-access-8n4s9\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:12.703345 master-2 kubenswrapper[4776]: I1011 10:53:12.703275 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-8qhqm" podStartSLOduration=3.196463907 podStartE2EDuration="11.703253062s" podCreationTimestamp="2025-10-11 10:53:01 +0000 UTC" firstStartedPulling="2025-10-11 10:53:02.709795051 +0000 UTC m=+1617.494221760" lastFinishedPulling="2025-10-11 10:53:11.216584206 +0000 UTC m=+1626.001010915" observedRunningTime="2025-10-11 10:53:12.664622095 +0000 UTC m=+1627.449048804" watchObservedRunningTime="2025-10-11 10:53:12.703253062 +0000 UTC m=+1627.487679771" Oct 11 10:53:12.849898 master-2 kubenswrapper[4776]: I1011 10:53:12.849817 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=7.10464565 podStartE2EDuration="15.849799361s" podCreationTimestamp="2025-10-11 10:52:57 +0000 UTC" firstStartedPulling="2025-10-11 10:53:02.726222286 +0000 UTC m=+1617.510648995" lastFinishedPulling="2025-10-11 10:53:11.471375997 +0000 UTC m=+1626.255802706" observedRunningTime="2025-10-11 10:53:12.84460599 +0000 UTC m=+1627.629032699" watchObservedRunningTime="2025-10-11 10:53:12.849799361 +0000 UTC m=+1627.634226070" Oct 11 10:53:13.011715 master-2 kubenswrapper[4776]: I1011 10:53:13.011568 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-kcjm9" 
podStartSLOduration=3.361600166 podStartE2EDuration="11.011549942s" podCreationTimestamp="2025-10-11 10:53:02 +0000 UTC" firstStartedPulling="2025-10-11 10:53:03.839771718 +0000 UTC m=+1618.624198427" lastFinishedPulling="2025-10-11 10:53:11.489721494 +0000 UTC m=+1626.274148203" observedRunningTime="2025-10-11 10:53:12.997126581 +0000 UTC m=+1627.781553290" watchObservedRunningTime="2025-10-11 10:53:13.011549942 +0000 UTC m=+1627.795976651" Oct 11 10:53:13.416757 master-2 kubenswrapper[4776]: I1011 10:53:13.416694 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"914ac6d0-5a85-4b2d-b4d4-202def09b0d8","Type":"ContainerStarted","Data":"e2e689b6171c3c9555e3da0b9467cad86d6df71fe90a288a26e58488f8162ae3"} Oct 11 10:53:13.419472 master-2 kubenswrapper[4776]: I1011 10:53:13.419433 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4m6km" event={"ID":"894f72d0-cdc8-4904-b8a4-0e808ce0b855","Type":"ContainerStarted","Data":"9d2238cd76bcd430302dc7e823623c13022114d0eab4f21df80493efa4fd846b"} Oct 11 10:53:13.419472 master-2 kubenswrapper[4776]: I1011 10:53:13.419472 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4m6km" event={"ID":"894f72d0-cdc8-4904-b8a4-0e808ce0b855","Type":"ContainerStarted","Data":"53ee7d18ef164052bc930ceab96f754b7c6524fc3805bf0c6265c2a82527031f"} Oct 11 10:53:13.419635 master-2 kubenswrapper[4776]: I1011 10:53:13.419596 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:13.421920 master-2 kubenswrapper[4776]: I1011 10:53:13.421892 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-2" event={"ID":"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0","Type":"ContainerStarted","Data":"95de2747230b1e6ce345bd8b5619b82e79500b7d95b612f55c02d195c5ea9860"} Oct 11 10:53:13.533332 master-2 kubenswrapper[4776]: I1011 10:53:13.530963 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-4m6km" podStartSLOduration=6.4354767299999995 podStartE2EDuration="12.530945939s" podCreationTimestamp="2025-10-11 10:53:01 +0000 UTC" firstStartedPulling="2025-10-11 10:53:03.395081879 +0000 UTC m=+1618.179508588" lastFinishedPulling="2025-10-11 10:53:09.490551088 +0000 UTC m=+1624.274977797" observedRunningTime="2025-10-11 10:53:13.480866003 +0000 UTC m=+1628.265292712" watchObservedRunningTime="2025-10-11 10:53:13.530945939 +0000 UTC m=+1628.315372648" Oct 11 10:53:13.667309 master-2 kubenswrapper[4776]: I1011 10:53:13.667237 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-057ac20b-a0d2-4376-998d-12784c232497\" (UniqueName: \"kubernetes.io/csi/topolvm.io^0888aa49-e062-4b2f-84d8-747b79ae5873\") pod \"openstack-cell1-galera-0\" (UID: \"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9\") " pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:13.835366 master-2 kubenswrapper[4776]: I1011 10:53:13.835306 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:14.272605 master-2 kubenswrapper[4776]: I1011 10:53:14.272557 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 11 10:53:14.276764 master-2 kubenswrapper[4776]: W1011 10:53:14.276615 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda37aeaa8_8c12_4ab0_a4a6_89b3c92886d9.slice/crio-742f359ce2f15ab94aa45089d97142c58f7bc9946b63a84664086d6b121215ec WatchSource:0}: Error finding container 742f359ce2f15ab94aa45089d97142c58f7bc9946b63a84664086d6b121215ec: Status 404 returned error can't find the container with id 742f359ce2f15ab94aa45089d97142c58f7bc9946b63a84664086d6b121215ec Oct 11 10:53:14.436800 master-2 kubenswrapper[4776]: I1011 10:53:14.436721 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9","Type":"ContainerStarted","Data":"742f359ce2f15ab94aa45089d97142c58f7bc9946b63a84664086d6b121215ec"} Oct 11 10:53:14.440706 master-2 kubenswrapper[4776]: I1011 10:53:14.440612 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"8d885df4-18cc-401a-8226-cd3d17b3f770","Type":"ContainerStarted","Data":"fb03c91ad7a42c396933f0f240b96a175ee42b923ebf15def67f14a383df0288"} Oct 11 10:53:14.441593 master-2 kubenswrapper[4776]: I1011 10:53:14.441552 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:15.605538 master-2 kubenswrapper[4776]: I1011 10:53:15.605474 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-1"] Oct 11 10:53:15.606512 master-2 kubenswrapper[4776]: I1011 10:53:15.606479 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-1" Oct 11 10:53:15.611751 master-2 kubenswrapper[4776]: I1011 10:53:15.611726 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Oct 11 10:53:15.611869 master-2 kubenswrapper[4776]: I1011 10:53:15.611828 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Oct 11 10:53:15.623116 master-2 kubenswrapper[4776]: I1011 10:53:15.623067 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-1"] Oct 11 10:53:15.801824 master-2 kubenswrapper[4776]: I1011 10:53:15.801769 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92bf399-88cc-4b7b-8048-81fda1a2e172-combined-ca-bundle\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1" Oct 11 10:53:15.802211 master-2 kubenswrapper[4776]: I1011 10:53:15.801919 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f92bf399-88cc-4b7b-8048-81fda1a2e172-kolla-config\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1" Oct 11 10:53:15.802211 master-2 kubenswrapper[4776]: I1011 10:53:15.802040 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f92bf399-88cc-4b7b-8048-81fda1a2e172-memcached-tls-certs\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1" Oct 11 10:53:15.802211 master-2 kubenswrapper[4776]: I1011 10:53:15.802111 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f92bf399-88cc-4b7b-8048-81fda1a2e172-config-data\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1" Oct 11 10:53:15.802211 master-2 kubenswrapper[4776]: I1011 10:53:15.802172 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqk9w\" (UniqueName: \"kubernetes.io/projected/f92bf399-88cc-4b7b-8048-81fda1a2e172-kube-api-access-rqk9w\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1" Oct 11 10:53:15.904239 master-2 kubenswrapper[4776]: I1011 10:53:15.904044 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92bf399-88cc-4b7b-8048-81fda1a2e172-combined-ca-bundle\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1" Oct 11 10:53:15.904239 master-2 kubenswrapper[4776]: I1011 10:53:15.904101 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f92bf399-88cc-4b7b-8048-81fda1a2e172-kolla-config\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1" Oct 11 10:53:15.904239 master-2 kubenswrapper[4776]: I1011 10:53:15.904139 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f92bf399-88cc-4b7b-8048-81fda1a2e172-memcached-tls-certs\") pod \"memcached-1\" (UID: 
\"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1" Oct 11 10:53:15.904239 master-2 kubenswrapper[4776]: I1011 10:53:15.904169 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f92bf399-88cc-4b7b-8048-81fda1a2e172-config-data\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1" Oct 11 10:53:15.904239 master-2 kubenswrapper[4776]: I1011 10:53:15.904193 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqk9w\" (UniqueName: \"kubernetes.io/projected/f92bf399-88cc-4b7b-8048-81fda1a2e172-kube-api-access-rqk9w\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1" Oct 11 10:53:15.905496 master-2 kubenswrapper[4776]: I1011 10:53:15.905446 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f92bf399-88cc-4b7b-8048-81fda1a2e172-config-data\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1" Oct 11 10:53:15.905606 master-2 kubenswrapper[4776]: I1011 10:53:15.905575 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f92bf399-88cc-4b7b-8048-81fda1a2e172-kolla-config\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1" Oct 11 10:53:15.907555 master-2 kubenswrapper[4776]: I1011 10:53:15.907521 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92bf399-88cc-4b7b-8048-81fda1a2e172-combined-ca-bundle\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1" Oct 11 10:53:15.908921 master-2 kubenswrapper[4776]: I1011 10:53:15.908890 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f92bf399-88cc-4b7b-8048-81fda1a2e172-memcached-tls-certs\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1" Oct 11 10:53:15.923901 master-2 kubenswrapper[4776]: I1011 10:53:15.923868 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqk9w\" (UniqueName: \"kubernetes.io/projected/f92bf399-88cc-4b7b-8048-81fda1a2e172-kube-api-access-rqk9w\") pod \"memcached-1\" (UID: \"f92bf399-88cc-4b7b-8048-81fda1a2e172\") " pod="openstack/memcached-1" Oct 11 10:53:15.929455 master-2 kubenswrapper[4776]: I1011 10:53:15.929317 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-1" Oct 11 10:53:16.900766 master-2 kubenswrapper[4776]: I1011 10:53:16.900718 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-1"] Oct 11 10:53:16.904686 master-2 kubenswrapper[4776]: W1011 10:53:16.904637 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf92bf399_88cc_4b7b_8048_81fda1a2e172.slice/crio-cc4040e788101c436998ccffe07ed7d3bc4b40a2a6cddb30658fd48a58139457 WatchSource:0}: Error finding container cc4040e788101c436998ccffe07ed7d3bc4b40a2a6cddb30658fd48a58139457: Status 404 returned error can't find the container with id cc4040e788101c436998ccffe07ed7d3bc4b40a2a6cddb30658fd48a58139457 Oct 11 10:53:17.464794 master-2 kubenswrapper[4776]: I1011 10:53:17.464734 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 11 10:53:17.476843 master-2 kubenswrapper[4776]: I1011 10:53:17.476746 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-1" event={"ID":"f92bf399-88cc-4b7b-8048-81fda1a2e172","Type":"ContainerStarted","Data":"cc4040e788101c436998ccffe07ed7d3bc4b40a2a6cddb30658fd48a58139457"} Oct 11 10:53:17.478474 master-2 kubenswrapper[4776]: I1011 10:53:17.478426 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-2" event={"ID":"72994ad3-2bca-4875-97f7-f98c00f64626","Type":"ContainerStarted","Data":"fbba402b233426c847ae83ebba46e37146dd642ee086952f7ff24eada01f5d5e"} Oct 11 10:53:17.480728 master-2 kubenswrapper[4776]: I1011 10:53:17.480231 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9","Type":"ContainerStarted","Data":"8d344e8337f6b6811ddbf71dee0be912d8e27d3cf52765f3b09cc5a456d0a2bf"} Oct 11 10:53:18.833457 master-2 kubenswrapper[4776]: I1011 10:53:18.833385 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Oct 11 10:53:18.834612 master-2 kubenswrapper[4776]: I1011 10:53:18.834570 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:18.838023 master-2 kubenswrapper[4776]: I1011 10:53:18.837770 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Oct 11 10:53:18.838023 master-2 kubenswrapper[4776]: I1011 10:53:18.837933 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Oct 11 10:53:18.839477 master-2 kubenswrapper[4776]: I1011 10:53:18.839341 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Oct 11 10:53:18.854453 master-2 kubenswrapper[4776]: I1011 10:53:18.854409 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Oct 11 10:53:18.959911 master-2 kubenswrapper[4776]: I1011 10:53:18.959871 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/773e37f4-d372-40a8-936f-5b148ca7dabf-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:18.960187 master-2 kubenswrapper[4776]: I1011 10:53:18.960172 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/773e37f4-d372-40a8-936f-5b148ca7dabf-config\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:18.960360 master-2 kubenswrapper[4776]: I1011 10:53:18.960348 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/773e37f4-d372-40a8-936f-5b148ca7dabf-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:18.960488 master-2 kubenswrapper[4776]: I1011 10:53:18.960474 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-623a62ea-a36e-4deb-939a-2ce2044bc61a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^80f154a4-9ed7-483e-9641-598f7985618e\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:18.960595 master-2 kubenswrapper[4776]: I1011 10:53:18.960583 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773e37f4-d372-40a8-936f-5b148ca7dabf-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:18.960752 master-2 kubenswrapper[4776]: I1011 10:53:18.960733 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/773e37f4-d372-40a8-936f-5b148ca7dabf-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:18.960878 master-2 kubenswrapper[4776]: I1011 10:53:18.960864 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/773e37f4-d372-40a8-936f-5b148ca7dabf-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " 
pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:18.960994 master-2 kubenswrapper[4776]: I1011 10:53:18.960981 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw8lr\" (UniqueName: \"kubernetes.io/projected/773e37f4-d372-40a8-936f-5b148ca7dabf-kube-api-access-fw8lr\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.058210 master-2 kubenswrapper[4776]: I1011 10:53:19.058157 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 11 10:53:19.059715 master-2 kubenswrapper[4776]: I1011 10:53:19.059698 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.062724 master-2 kubenswrapper[4776]: I1011 10:53:19.062097 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/773e37f4-d372-40a8-936f-5b148ca7dabf-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.062724 master-2 kubenswrapper[4776]: I1011 10:53:19.062138 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/773e37f4-d372-40a8-936f-5b148ca7dabf-config\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.062724 master-2 kubenswrapper[4776]: I1011 10:53:19.062165 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/773e37f4-d372-40a8-936f-5b148ca7dabf-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.062724 master-2 kubenswrapper[4776]: I1011 10:53:19.062194 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-623a62ea-a36e-4deb-939a-2ce2044bc61a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^80f154a4-9ed7-483e-9641-598f7985618e\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.062724 master-2 kubenswrapper[4776]: I1011 10:53:19.062210 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773e37f4-d372-40a8-936f-5b148ca7dabf-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.062724 master-2 kubenswrapper[4776]: I1011 10:53:19.062248 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/773e37f4-d372-40a8-936f-5b148ca7dabf-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.062724 master-2 kubenswrapper[4776]: I1011 10:53:19.062277 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/773e37f4-d372-40a8-936f-5b148ca7dabf-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.062724 master-2 kubenswrapper[4776]: I1011 10:53:19.062294 4776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fw8lr\" (UniqueName: \"kubernetes.io/projected/773e37f4-d372-40a8-936f-5b148ca7dabf-kube-api-access-fw8lr\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.064042 master-2 kubenswrapper[4776]: I1011 10:53:19.064022 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/773e37f4-d372-40a8-936f-5b148ca7dabf-config\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.064874 master-2 kubenswrapper[4776]: I1011 10:53:19.064857 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/773e37f4-d372-40a8-936f-5b148ca7dabf-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.066172 master-2 kubenswrapper[4776]: I1011 10:53:19.066127 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Oct 11 10:53:19.066418 master-2 kubenswrapper[4776]: I1011 10:53:19.066392 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Oct 11 10:53:19.066657 master-2 kubenswrapper[4776]: I1011 10:53:19.066635 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Oct 11 10:53:19.067586 master-2 kubenswrapper[4776]: I1011 10:53:19.067561 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/773e37f4-d372-40a8-936f-5b148ca7dabf-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.068742 master-2 kubenswrapper[4776]: I1011 10:53:19.068500 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/773e37f4-d372-40a8-936f-5b148ca7dabf-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.072869 master-2 kubenswrapper[4776]: I1011 10:53:19.071475 4776 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 11 10:53:19.072869 master-2 kubenswrapper[4776]: I1011 10:53:19.071531 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-623a62ea-a36e-4deb-939a-2ce2044bc61a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^80f154a4-9ed7-483e-9641-598f7985618e\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/dae1067f286c2d27a912f7c78728bf47135dfe55e1e4bb4669097781af956b57/globalmount\"" pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.075994 master-2 kubenswrapper[4776]: I1011 10:53:19.075373 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 11 10:53:19.079714 master-2 kubenswrapper[4776]: I1011 10:53:19.079655 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773e37f4-d372-40a8-936f-5b148ca7dabf-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.081475 master-2 kubenswrapper[4776]: I1011 10:53:19.081429 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/773e37f4-d372-40a8-936f-5b148ca7dabf-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.134190 master-2 kubenswrapper[4776]: I1011 10:53:19.133536 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw8lr\" (UniqueName: \"kubernetes.io/projected/773e37f4-d372-40a8-936f-5b148ca7dabf-kube-api-access-fw8lr\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:19.163744 master-2 kubenswrapper[4776]: I1011 10:53:19.163589 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b7d3c19-b426-4026-94db-329447ffb1d5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.163744 master-2 kubenswrapper[4776]: I1011 10:53:19.163709 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7d3c19-b426-4026-94db-329447ffb1d5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.163962 master-2 kubenswrapper[4776]: I1011 10:53:19.163766 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b7d3c19-b426-4026-94db-329447ffb1d5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.163962 master-2 kubenswrapper[4776]: I1011 10:53:19.163820 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7d3c19-b426-4026-94db-329447ffb1d5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.163962 master-2 kubenswrapper[4776]: I1011 10:53:19.163838 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qgf4\" (UniqueName: \"kubernetes.io/projected/3b7d3c19-b426-4026-94db-329447ffb1d5-kube-api-access-7qgf4\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.163962 master-2 kubenswrapper[4776]: I1011 10:53:19.163880 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b7d3c19-b426-4026-94db-329447ffb1d5-config\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.163962 master-2 kubenswrapper[4776]: I1011 10:53:19.163940 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4c914b16-9f88-4062-9dcc-b1e3200271a6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^be314f8f-443e-4a70-b706-bc6a52464ec7\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.164114 master-2 kubenswrapper[4776]: I1011 10:53:19.163998 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7d3c19-b426-4026-94db-329447ffb1d5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.265777 master-2 kubenswrapper[4776]: I1011 10:53:19.265717 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7d3c19-b426-4026-94db-329447ffb1d5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.265777 master-2 kubenswrapper[4776]: I1011 10:53:19.265766 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qgf4\" (UniqueName: \"kubernetes.io/projected/3b7d3c19-b426-4026-94db-329447ffb1d5-kube-api-access-7qgf4\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.266004 master-2 kubenswrapper[4776]: I1011 10:53:19.265835 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b7d3c19-b426-4026-94db-329447ffb1d5-config\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.266004 master-2 kubenswrapper[4776]: I1011 10:53:19.265866 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4c914b16-9f88-4062-9dcc-b1e3200271a6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^be314f8f-443e-4a70-b706-bc6a52464ec7\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.266004 master-2 kubenswrapper[4776]: I1011 10:53:19.265895 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7d3c19-b426-4026-94db-329447ffb1d5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.266004 master-2 kubenswrapper[4776]: I1011 10:53:19.265926 4776 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b7d3c19-b426-4026-94db-329447ffb1d5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.266004 master-2 kubenswrapper[4776]: I1011 10:53:19.265951 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7d3c19-b426-4026-94db-329447ffb1d5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.266004 master-2 kubenswrapper[4776]: I1011 10:53:19.265980 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b7d3c19-b426-4026-94db-329447ffb1d5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.267828 master-2 kubenswrapper[4776]: I1011 10:53:19.267617 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b7d3c19-b426-4026-94db-329447ffb1d5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.267828 master-2 kubenswrapper[4776]: I1011 10:53:19.267765 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b7d3c19-b426-4026-94db-329447ffb1d5-config\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.267975 master-2 kubenswrapper[4776]: I1011 10:53:19.267938 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b7d3c19-b426-4026-94db-329447ffb1d5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.268031 master-2 kubenswrapper[4776]: I1011 10:53:19.267977 4776 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 11 10:53:19.268031 master-2 kubenswrapper[4776]: I1011 10:53:19.268002 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4c914b16-9f88-4062-9dcc-b1e3200271a6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^be314f8f-443e-4a70-b706-bc6a52464ec7\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/a6a45de1af4698cfd258e4b0cc4c9b6b0e6a932c19773f8cb77ec5494d801c93/globalmount\"" pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.270136 master-2 kubenswrapper[4776]: I1011 10:53:19.270091 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7d3c19-b426-4026-94db-329447ffb1d5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.270263 master-2 kubenswrapper[4776]: I1011 10:53:19.270218 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7d3c19-b426-4026-94db-329447ffb1d5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.270592 master-2 kubenswrapper[4776]: I1011 10:53:19.270555 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7d3c19-b426-4026-94db-329447ffb1d5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.285222 master-2 kubenswrapper[4776]: I1011 10:53:19.285128 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qgf4\" (UniqueName: \"kubernetes.io/projected/3b7d3c19-b426-4026-94db-329447ffb1d5-kube-api-access-7qgf4\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:19.503954 master-2 kubenswrapper[4776]: I1011 10:53:19.503901 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-1" event={"ID":"f92bf399-88cc-4b7b-8048-81fda1a2e172","Type":"ContainerStarted","Data":"eb58a07c315c6bd8633e9756d3be3d693a76dd4977c59794237d670da2037df0"} Oct 11 10:53:19.504841 master-2 kubenswrapper[4776]: I1011 10:53:19.504814 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-1" Oct 11 10:53:20.512040 master-2 kubenswrapper[4776]: I1011 10:53:20.511995 4776 generic.go:334] "Generic (PLEG): container finished" podID="8d885df4-18cc-401a-8226-cd3d17b3f770" containerID="fb03c91ad7a42c396933f0f240b96a175ee42b923ebf15def67f14a383df0288" exitCode=0 Oct 11 10:53:20.512757 master-2 kubenswrapper[4776]: I1011 10:53:20.512729 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"8d885df4-18cc-401a-8226-cd3d17b3f770","Type":"ContainerDied","Data":"fb03c91ad7a42c396933f0f240b96a175ee42b923ebf15def67f14a383df0288"} Oct 11 10:53:20.556801 master-2 kubenswrapper[4776]: I1011 10:53:20.556715 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-1" podStartSLOduration=3.690140503 podStartE2EDuration="5.556672817s" podCreationTimestamp="2025-10-11 10:53:15 +0000 UTC" firstStartedPulling="2025-10-11 10:53:16.907247464 +0000 UTC m=+1631.691674173" lastFinishedPulling="2025-10-11 
10:53:18.773779778 +0000 UTC m=+1633.558206487" observedRunningTime="2025-10-11 10:53:19.53406699 +0000 UTC m=+1634.318493699" watchObservedRunningTime="2025-10-11 10:53:20.556672817 +0000 UTC m=+1635.341099526" Oct 11 10:53:20.590785 master-2 kubenswrapper[4776]: I1011 10:53:20.576192 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-623a62ea-a36e-4deb-939a-2ce2044bc61a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^80f154a4-9ed7-483e-9641-598f7985618e\") pod \"ovsdbserver-nb-2\" (UID: \"773e37f4-d372-40a8-936f-5b148ca7dabf\") " pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:20.777118 master-2 kubenswrapper[4776]: I1011 10:53:20.776983 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:21.303244 master-2 kubenswrapper[4776]: I1011 10:53:21.303193 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Oct 11 10:53:21.307039 master-2 kubenswrapper[4776]: W1011 10:53:21.306995 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod773e37f4_d372_40a8_936f_5b148ca7dabf.slice/crio-71f294f185ce2b30983bfd4fed413d88a92bad22a24f92e5870c9c9f40b58587 WatchSource:0}: Error finding container 71f294f185ce2b30983bfd4fed413d88a92bad22a24f92e5870c9c9f40b58587: Status 404 returned error can't find the container with id 71f294f185ce2b30983bfd4fed413d88a92bad22a24f92e5870c9c9f40b58587 Oct 11 10:53:21.520027 master-2 kubenswrapper[4776]: I1011 10:53:21.519890 4776 generic.go:334] "Generic (PLEG): container finished" podID="72994ad3-2bca-4875-97f7-f98c00f64626" containerID="fbba402b233426c847ae83ebba46e37146dd642ee086952f7ff24eada01f5d5e" exitCode=0 Oct 11 10:53:21.520027 master-2 kubenswrapper[4776]: I1011 10:53:21.519971 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-2" event={"ID":"72994ad3-2bca-4875-97f7-f98c00f64626","Type":"ContainerDied","Data":"fbba402b233426c847ae83ebba46e37146dd642ee086952f7ff24eada01f5d5e"} Oct 11 10:53:21.523597 master-2 kubenswrapper[4776]: I1011 10:53:21.523555 4776 generic.go:334] "Generic (PLEG): container finished" podID="a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9" containerID="8d344e8337f6b6811ddbf71dee0be912d8e27d3cf52765f3b09cc5a456d0a2bf" exitCode=0 Oct 11 10:53:21.523703 master-2 kubenswrapper[4776]: I1011 10:53:21.523624 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9","Type":"ContainerDied","Data":"8d344e8337f6b6811ddbf71dee0be912d8e27d3cf52765f3b09cc5a456d0a2bf"} Oct 11 10:53:21.525861 master-2 kubenswrapper[4776]: I1011 10:53:21.525827 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"773e37f4-d372-40a8-936f-5b148ca7dabf","Type":"ContainerStarted","Data":"71f294f185ce2b30983bfd4fed413d88a92bad22a24f92e5870c9c9f40b58587"} Oct 11 10:53:21.884582 master-2 kubenswrapper[4776]: I1011 10:53:21.884540 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4c914b16-9f88-4062-9dcc-b1e3200271a6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^be314f8f-443e-4a70-b706-bc6a52464ec7\") pod \"ovsdbserver-sb-0\" (UID: \"3b7d3c19-b426-4026-94db-329447ffb1d5\") " pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:22.170279 master-2 kubenswrapper[4776]: I1011 10:53:22.170228 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:22.535288 master-2 kubenswrapper[4776]: I1011 10:53:22.534844 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a37aeaa8-8c12-4ab0-a4a6-89b3c92886d9","Type":"ContainerStarted","Data":"77c5dc7e3fe0b716852a2a401ba9ab5958e35df34584b93d6320493441342b90"} Oct 11 10:53:22.536373 master-2 kubenswrapper[4776]: I1011 10:53:22.536328 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"773e37f4-d372-40a8-936f-5b148ca7dabf","Type":"ContainerStarted","Data":"062c9e2240151337e31541822816c848fa2cd72677bad1cdf81d23f0f7a3a352"} Oct 11 10:53:22.538262 master-2 kubenswrapper[4776]: I1011 10:53:22.538208 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-2" event={"ID":"72994ad3-2bca-4875-97f7-f98c00f64626","Type":"ContainerStarted","Data":"196e365f5d38da11085297740b0c9f63bd9bc4a7f8864c167ea8e54ddb465ba2"} Oct 11 10:53:22.566123 master-2 kubenswrapper[4776]: I1011 10:53:22.566025 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.32865161 podStartE2EDuration="29.566004858s" podCreationTimestamp="2025-10-11 10:52:53 +0000 UTC" firstStartedPulling="2025-10-11 10:53:14.279099422 +0000 UTC m=+1629.063526131" lastFinishedPulling="2025-10-11 10:53:16.51645268 +0000 UTC m=+1631.300879379" observedRunningTime="2025-10-11 10:53:22.561308822 +0000 UTC m=+1637.345735531" watchObservedRunningTime="2025-10-11 10:53:22.566004858 +0000 UTC m=+1637.350431567" Oct 11 10:53:22.595471 master-2 kubenswrapper[4776]: I1011 10:53:22.594005 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-2" podStartSLOduration=26.339087765 podStartE2EDuration="30.593986106s" podCreationTimestamp="2025-10-11 10:52:52 +0000 UTC" firstStartedPulling="2025-10-11 10:53:12.26789111 +0000 UTC m=+1627.052317819" lastFinishedPulling="2025-10-11 10:53:16.522789451 +0000 UTC m=+1631.307216160" observedRunningTime="2025-10-11 10:53:22.59116351 +0000 UTC m=+1637.375590219" watchObservedRunningTime="2025-10-11 10:53:22.593986106 +0000 UTC m=+1637.378412815" Oct 11 10:53:22.700436 master-2 kubenswrapper[4776]: I1011 10:53:22.700378 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 11 10:53:23.558647 master-2 kubenswrapper[4776]: I1011 10:53:23.558591 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"773e37f4-d372-40a8-936f-5b148ca7dabf","Type":"ContainerStarted","Data":"43156a111bf2bd6fe0db4b57580b70f4a1d5677b297e3635ff0ea32d04252bf0"} Oct 11 10:53:23.561084 master-2 kubenswrapper[4776]: I1011 10:53:23.561045 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"8d885df4-18cc-401a-8226-cd3d17b3f770","Type":"ContainerStarted","Data":"8f461b244015850ac05e373cc1ad030b00b88df90fa2baec16784ede4b40d659"} Oct 11 10:53:23.562358 master-2 kubenswrapper[4776]: I1011 10:53:23.562320 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3b7d3c19-b426-4026-94db-329447ffb1d5","Type":"ContainerStarted","Data":"06686e3fb84c12253dcb579f99feb711eb43f473276d120edabaa14c17179836"} Oct 11 10:53:23.594998 master-2 kubenswrapper[4776]: I1011 10:53:23.594919 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovsdbserver-nb-2" podStartSLOduration=22.732869398 podStartE2EDuration="23.594900486s" podCreationTimestamp="2025-10-11 10:53:00 +0000 UTC" firstStartedPulling="2025-10-11 10:53:21.310469773 +0000 UTC m=+1636.094896482" lastFinishedPulling="2025-10-11 10:53:22.172500861 +0000 UTC m=+1636.956927570" observedRunningTime="2025-10-11 10:53:23.593565069 +0000 UTC m=+1638.377991778" watchObservedRunningTime="2025-10-11 10:53:23.594900486 +0000 UTC m=+1638.379327195" Oct 11 10:53:23.777368 master-2 kubenswrapper[4776]: I1011 10:53:23.777225 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:23.836639 master-2 kubenswrapper[4776]: I1011 10:53:23.836288 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:23.836639 master-2 kubenswrapper[4776]: I1011 10:53:23.836339 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:24.571294 master-2 kubenswrapper[4776]: I1011 10:53:24.571238 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3b7d3c19-b426-4026-94db-329447ffb1d5","Type":"ContainerStarted","Data":"8bafe9e73c48a6f2ca91ecc18aded5fe6f748cd8cc1be007f768ae6e593a7052"} Oct 11 10:53:24.571294 master-2 kubenswrapper[4776]: I1011 10:53:24.571299 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3b7d3c19-b426-4026-94db-329447ffb1d5","Type":"ContainerStarted","Data":"4663b1bc12048409eb6f262f8d66bee58e0b78548547d2517a69cf91f411b327"} Oct 11 10:53:24.598622 master-2 kubenswrapper[4776]: I1011 10:53:24.598563 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=14.645665920999999 podStartE2EDuration="15.598546789s" podCreationTimestamp="2025-10-11 10:53:09 +0000 UTC" firstStartedPulling="2025-10-11 10:53:23.010810836 +0000 UTC m=+1637.795237545" lastFinishedPulling="2025-10-11 10:53:23.963691704 +0000 UTC m=+1638.748118413" observedRunningTime="2025-10-11 10:53:24.59638537 +0000 UTC m=+1639.380812079" watchObservedRunningTime="2025-10-11 10:53:24.598546789 +0000 UTC m=+1639.382973498" Oct 11 10:53:25.171114 master-2 kubenswrapper[4776]: I1011 10:53:25.171057 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:25.592966 master-2 kubenswrapper[4776]: I1011 10:53:25.592914 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"8d885df4-18cc-401a-8226-cd3d17b3f770","Type":"ContainerStarted","Data":"1d059243776dcb16eb25efaa222ac0ecf3d460ee9115a462cc85e773e5ee73c3"} Oct 11 10:53:25.593783 master-2 kubenswrapper[4776]: I1011 10:53:25.593664 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Oct 11 10:53:25.628040 master-2 kubenswrapper[4776]: I1011 10:53:25.627872 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=8.071524966 podStartE2EDuration="28.627852767s" podCreationTimestamp="2025-10-11 10:52:57 +0000 UTC" firstStartedPulling="2025-10-11 10:53:02.501048986 +0000 UTC m=+1617.285475695" lastFinishedPulling="2025-10-11 10:53:23.057376777 +0000 UTC m=+1637.841803496" observedRunningTime="2025-10-11 10:53:25.619958564 +0000 UTC 
m=+1640.404385273" watchObservedRunningTime="2025-10-11 10:53:25.627852767 +0000 UTC m=+1640.412279476" Oct 11 10:53:25.777510 master-2 kubenswrapper[4776]: I1011 10:53:25.777446 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:25.931254 master-2 kubenswrapper[4776]: I1011 10:53:25.931205 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-1" Oct 11 10:53:26.600411 master-2 kubenswrapper[4776]: I1011 10:53:26.600313 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Oct 11 10:53:26.814693 master-2 kubenswrapper[4776]: I1011 10:53:26.814530 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:27.170842 master-2 kubenswrapper[4776]: I1011 10:53:27.170757 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:27.641406 master-2 kubenswrapper[4776]: I1011 10:53:27.641357 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Oct 11 10:53:28.210077 master-2 kubenswrapper[4776]: I1011 10:53:28.210005 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:31.261770 master-2 kubenswrapper[4776]: I1011 10:53:31.261548 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-2" Oct 11 10:53:31.261770 master-2 kubenswrapper[4776]: I1011 10:53:31.261610 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-2" Oct 11 10:53:31.314387 master-2 kubenswrapper[4776]: I1011 10:53:31.314339 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-2" Oct 11 10:53:31.683175 master-2 kubenswrapper[4776]: I1011 10:53:31.683073 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-2" Oct 11 10:53:32.366988 master-2 kubenswrapper[4776]: I1011 10:53:32.366919 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Oct 11 10:53:35.692058 master-2 kubenswrapper[4776]: I1011 10:53:35.691993 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-d7297"] Oct 11 10:53:35.693312 master-2 kubenswrapper[4776]: I1011 10:53:35.693294 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-d7297" Oct 11 10:53:35.716188 master-2 kubenswrapper[4776]: I1011 10:53:35.716133 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-d7297"] Oct 11 10:53:35.765610 master-2 kubenswrapper[4776]: I1011 10:53:35.765548 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpqcd\" (UniqueName: \"kubernetes.io/projected/829885e7-9e39-447e-a4f0-2ac128443d04-kube-api-access-zpqcd\") pod \"keystone-db-create-d7297\" (UID: \"829885e7-9e39-447e-a4f0-2ac128443d04\") " pod="openstack/keystone-db-create-d7297" Oct 11 10:53:35.866985 master-2 kubenswrapper[4776]: I1011 10:53:35.866841 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpqcd\" (UniqueName: \"kubernetes.io/projected/829885e7-9e39-447e-a4f0-2ac128443d04-kube-api-access-zpqcd\") pod \"keystone-db-create-d7297\" (UID: \"829885e7-9e39-447e-a4f0-2ac128443d04\") " pod="openstack/keystone-db-create-d7297" Oct 11 10:53:35.889928 master-2 kubenswrapper[4776]: I1011 10:53:35.889864 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpqcd\" (UniqueName: \"kubernetes.io/projected/829885e7-9e39-447e-a4f0-2ac128443d04-kube-api-access-zpqcd\") pod \"keystone-db-create-d7297\" (UID: \"829885e7-9e39-447e-a4f0-2ac128443d04\") " pod="openstack/keystone-db-create-d7297" Oct 11 10:53:36.053600 master-2 kubenswrapper[4776]: I1011 10:53:36.053411 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-d7297" Oct 11 10:53:36.081425 master-2 kubenswrapper[4776]: I1011 10:53:36.081346 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-v9tlh"] Oct 11 10:53:36.082373 master-2 kubenswrapper[4776]: I1011 10:53:36.082328 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-v9tlh" Oct 11 10:53:36.195710 master-2 kubenswrapper[4776]: I1011 10:53:36.195461 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-v9tlh"] Oct 11 10:53:36.272912 master-2 kubenswrapper[4776]: I1011 10:53:36.272824 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6tvr\" (UniqueName: \"kubernetes.io/projected/1a7cb456-8a0b-4e56-9dc5-93b488813f77-kube-api-access-g6tvr\") pod \"placement-db-create-v9tlh\" (UID: \"1a7cb456-8a0b-4e56-9dc5-93b488813f77\") " pod="openstack/placement-db-create-v9tlh" Oct 11 10:53:36.374070 master-2 kubenswrapper[4776]: I1011 10:53:36.374021 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6tvr\" (UniqueName: \"kubernetes.io/projected/1a7cb456-8a0b-4e56-9dc5-93b488813f77-kube-api-access-g6tvr\") pod \"placement-db-create-v9tlh\" (UID: \"1a7cb456-8a0b-4e56-9dc5-93b488813f77\") " pod="openstack/placement-db-create-v9tlh" Oct 11 10:53:36.494482 master-2 kubenswrapper[4776]: I1011 10:53:36.494025 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6tvr\" (UniqueName: \"kubernetes.io/projected/1a7cb456-8a0b-4e56-9dc5-93b488813f77-kube-api-access-g6tvr\") pod \"placement-db-create-v9tlh\" (UID: \"1a7cb456-8a0b-4e56-9dc5-93b488813f77\") " pod="openstack/placement-db-create-v9tlh" Oct 11 10:53:36.506363 master-2 kubenswrapper[4776]: I1011 10:53:36.506304 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-d7297"] Oct 11 10:53:36.671811 master-2 kubenswrapper[4776]: I1011 10:53:36.671759 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-d7297" event={"ID":"829885e7-9e39-447e-a4f0-2ac128443d04","Type":"ContainerStarted","Data":"c3918c1895cf6e1513c6d1aad5e8a7d95dd81e5fff0ab3e9dc4c012a2a71eedd"} Oct 11 10:53:36.768655 master-2 kubenswrapper[4776]: I1011 10:53:36.768473 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-v9tlh" Oct 11 10:53:37.267358 master-2 kubenswrapper[4776]: I1011 10:53:37.267310 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-v9tlh"] Oct 11 10:53:37.270216 master-2 kubenswrapper[4776]: W1011 10:53:37.270177 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a7cb456_8a0b_4e56_9dc5_93b488813f77.slice/crio-a1eaf76101cdb0c957000d8935800012f9d2b669a931ab2203ec03e40847d3ae WatchSource:0}: Error finding container a1eaf76101cdb0c957000d8935800012f9d2b669a931ab2203ec03e40847d3ae: Status 404 returned error can't find the container with id a1eaf76101cdb0c957000d8935800012f9d2b669a931ab2203ec03e40847d3ae Oct 11 10:53:37.681060 master-2 kubenswrapper[4776]: I1011 10:53:37.680941 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-d7297" event={"ID":"829885e7-9e39-447e-a4f0-2ac128443d04","Type":"ContainerStarted","Data":"6868226996ff13456bac113fdafadf8a4b2b3ab56a80c2fc96fb7d2ab46abffa"} Oct 11 10:53:37.683069 master-2 kubenswrapper[4776]: I1011 10:53:37.682992 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v9tlh" event={"ID":"1a7cb456-8a0b-4e56-9dc5-93b488813f77","Type":"ContainerStarted","Data":"c49bb5e7b7b29373f91460103b137215ef53ea437154f26d2c8b1bb8b44e90c5"} Oct 11 10:53:37.683069 master-2 kubenswrapper[4776]: I1011 10:53:37.683069 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v9tlh" event={"ID":"1a7cb456-8a0b-4e56-9dc5-93b488813f77","Type":"ContainerStarted","Data":"a1eaf76101cdb0c957000d8935800012f9d2b669a931ab2203ec03e40847d3ae"} Oct 11 10:53:38.878277 master-2 kubenswrapper[4776]: I1011 10:53:38.878188 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-d7297" podStartSLOduration=3.8781667840000003 podStartE2EDuration="3.878166784s" podCreationTimestamp="2025-10-11 10:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:53:38.846581959 +0000 UTC m=+1653.631008678" watchObservedRunningTime="2025-10-11 10:53:38.878166784 +0000 UTC m=+1653.662593513" Oct 11 10:53:38.884460 master-2 kubenswrapper[4776]: I1011 10:53:38.884382 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-v9tlh" podStartSLOduration=2.884363192 podStartE2EDuration="2.884363192s" podCreationTimestamp="2025-10-11 10:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:53:38.878005599 +0000 UTC m=+1653.662432328" watchObservedRunningTime="2025-10-11 10:53:38.884363192 +0000 UTC m=+1653.668789901" Oct 11 10:53:41.532176 master-2 kubenswrapper[4776]: I1011 10:53:41.532105 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-8qhqm" podUID="065373ca-8c0f-489c-a72e-4d1aee1263ba" containerName="ovn-controller" probeResult="failure" output=< Oct 11 10:53:41.532176 master-2 kubenswrapper[4776]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 11 10:53:41.532176 master-2 kubenswrapper[4776]: > Oct 11 10:53:41.712722 master-2 kubenswrapper[4776]: I1011 10:53:41.712616 4776 generic.go:334] "Generic (PLEG): container finished" 
podID="1a7cb456-8a0b-4e56-9dc5-93b488813f77" containerID="c49bb5e7b7b29373f91460103b137215ef53ea437154f26d2c8b1bb8b44e90c5" exitCode=0 Oct 11 10:53:41.712980 master-2 kubenswrapper[4776]: I1011 10:53:41.712742 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v9tlh" event={"ID":"1a7cb456-8a0b-4e56-9dc5-93b488813f77","Type":"ContainerDied","Data":"c49bb5e7b7b29373f91460103b137215ef53ea437154f26d2c8b1bb8b44e90c5"} Oct 11 10:53:41.715221 master-2 kubenswrapper[4776]: I1011 10:53:41.715173 4776 generic.go:334] "Generic (PLEG): container finished" podID="829885e7-9e39-447e-a4f0-2ac128443d04" containerID="6868226996ff13456bac113fdafadf8a4b2b3ab56a80c2fc96fb7d2ab46abffa" exitCode=0 Oct 11 10:53:41.715316 master-2 kubenswrapper[4776]: I1011 10:53:41.715221 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-d7297" event={"ID":"829885e7-9e39-447e-a4f0-2ac128443d04","Type":"ContainerDied","Data":"6868226996ff13456bac113fdafadf8a4b2b3ab56a80c2fc96fb7d2ab46abffa"} Oct 11 10:53:43.365742 master-2 kubenswrapper[4776]: I1011 10:53:43.365695 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v9tlh" Oct 11 10:53:43.410438 master-2 kubenswrapper[4776]: I1011 10:53:43.409417 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6tvr\" (UniqueName: \"kubernetes.io/projected/1a7cb456-8a0b-4e56-9dc5-93b488813f77-kube-api-access-g6tvr\") pod \"1a7cb456-8a0b-4e56-9dc5-93b488813f77\" (UID: \"1a7cb456-8a0b-4e56-9dc5-93b488813f77\") " Oct 11 10:53:43.414824 master-2 kubenswrapper[4776]: I1011 10:53:43.414281 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a7cb456-8a0b-4e56-9dc5-93b488813f77-kube-api-access-g6tvr" (OuterVolumeSpecName: "kube-api-access-g6tvr") pod "1a7cb456-8a0b-4e56-9dc5-93b488813f77" (UID: "1a7cb456-8a0b-4e56-9dc5-93b488813f77"). InnerVolumeSpecName "kube-api-access-g6tvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:53:43.511557 master-2 kubenswrapper[4776]: I1011 10:53:43.511229 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6tvr\" (UniqueName: \"kubernetes.io/projected/1a7cb456-8a0b-4e56-9dc5-93b488813f77-kube-api-access-g6tvr\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:43.728923 master-2 kubenswrapper[4776]: I1011 10:53:43.728875 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-d7297" Oct 11 10:53:43.736768 master-2 kubenswrapper[4776]: I1011 10:53:43.736635 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v9tlh" event={"ID":"1a7cb456-8a0b-4e56-9dc5-93b488813f77","Type":"ContainerDied","Data":"a1eaf76101cdb0c957000d8935800012f9d2b669a931ab2203ec03e40847d3ae"} Oct 11 10:53:43.736768 master-2 kubenswrapper[4776]: I1011 10:53:43.736722 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1eaf76101cdb0c957000d8935800012f9d2b669a931ab2203ec03e40847d3ae" Oct 11 10:53:43.736991 master-2 kubenswrapper[4776]: I1011 10:53:43.736645 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-v9tlh" Oct 11 10:53:43.737900 master-2 kubenswrapper[4776]: I1011 10:53:43.737825 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-d7297" event={"ID":"829885e7-9e39-447e-a4f0-2ac128443d04","Type":"ContainerDied","Data":"c3918c1895cf6e1513c6d1aad5e8a7d95dd81e5fff0ab3e9dc4c012a2a71eedd"} Oct 11 10:53:43.737967 master-2 kubenswrapper[4776]: I1011 10:53:43.737899 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-d7297" Oct 11 10:53:43.738015 master-2 kubenswrapper[4776]: I1011 10:53:43.737902 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3918c1895cf6e1513c6d1aad5e8a7d95dd81e5fff0ab3e9dc4c012a2a71eedd" Oct 11 10:53:43.815012 master-2 kubenswrapper[4776]: I1011 10:53:43.814831 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpqcd\" (UniqueName: \"kubernetes.io/projected/829885e7-9e39-447e-a4f0-2ac128443d04-kube-api-access-zpqcd\") pod \"829885e7-9e39-447e-a4f0-2ac128443d04\" (UID: \"829885e7-9e39-447e-a4f0-2ac128443d04\") " Oct 11 10:53:43.817847 master-2 kubenswrapper[4776]: I1011 10:53:43.817778 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/829885e7-9e39-447e-a4f0-2ac128443d04-kube-api-access-zpqcd" (OuterVolumeSpecName: "kube-api-access-zpqcd") pod "829885e7-9e39-447e-a4f0-2ac128443d04" (UID: "829885e7-9e39-447e-a4f0-2ac128443d04"). InnerVolumeSpecName "kube-api-access-zpqcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:53:43.916529 master-2 kubenswrapper[4776]: I1011 10:53:43.916465 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpqcd\" (UniqueName: \"kubernetes.io/projected/829885e7-9e39-447e-a4f0-2ac128443d04-kube-api-access-zpqcd\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:44.748867 master-2 kubenswrapper[4776]: I1011 10:53:44.748799 4776 generic.go:334] "Generic (PLEG): container finished" podID="914ac6d0-5a85-4b2d-b4d4-202def09b0d8" containerID="e2e689b6171c3c9555e3da0b9467cad86d6df71fe90a288a26e58488f8162ae3" exitCode=0 Oct 11 10:53:44.749610 master-2 kubenswrapper[4776]: I1011 10:53:44.748917 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"914ac6d0-5a85-4b2d-b4d4-202def09b0d8","Type":"ContainerDied","Data":"e2e689b6171c3c9555e3da0b9467cad86d6df71fe90a288a26e58488f8162ae3"} Oct 11 10:53:44.751025 master-2 kubenswrapper[4776]: I1011 10:53:44.750780 4776 generic.go:334] "Generic (PLEG): container finished" podID="5a8ba065-7ef6-4bab-b20a-3bb274c93fa0" containerID="95de2747230b1e6ce345bd8b5619b82e79500b7d95b612f55c02d195c5ea9860" exitCode=0 Oct 11 10:53:44.751025 master-2 kubenswrapper[4776]: I1011 10:53:44.750828 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-2" event={"ID":"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0","Type":"ContainerDied","Data":"95de2747230b1e6ce345bd8b5619b82e79500b7d95b612f55c02d195c5ea9860"} Oct 11 10:53:44.817124 master-2 kubenswrapper[4776]: I1011 10:53:44.817006 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-sc5rx"] Oct 11 10:53:44.817477 master-2 kubenswrapper[4776]: E1011 10:53:44.817447 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a7cb456-8a0b-4e56-9dc5-93b488813f77" containerName="mariadb-database-create" Oct 11 
10:53:44.817477 master-2 kubenswrapper[4776]: I1011 10:53:44.817471 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a7cb456-8a0b-4e56-9dc5-93b488813f77" containerName="mariadb-database-create" Oct 11 10:53:44.817551 master-2 kubenswrapper[4776]: E1011 10:53:44.817499 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829885e7-9e39-447e-a4f0-2ac128443d04" containerName="mariadb-database-create" Oct 11 10:53:44.817551 master-2 kubenswrapper[4776]: I1011 10:53:44.817508 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="829885e7-9e39-447e-a4f0-2ac128443d04" containerName="mariadb-database-create" Oct 11 10:53:44.828994 master-2 kubenswrapper[4776]: I1011 10:53:44.828953 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="829885e7-9e39-447e-a4f0-2ac128443d04" containerName="mariadb-database-create" Oct 11 10:53:44.829053 master-2 kubenswrapper[4776]: I1011 10:53:44.828995 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a7cb456-8a0b-4e56-9dc5-93b488813f77" containerName="mariadb-database-create" Oct 11 10:53:44.831540 master-2 kubenswrapper[4776]: I1011 10:53:44.831511 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:44.845497 master-2 kubenswrapper[4776]: I1011 10:53:44.839642 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Oct 11 10:53:44.845497 master-2 kubenswrapper[4776]: I1011 10:53:44.840035 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 11 10:53:44.845497 master-2 kubenswrapper[4776]: I1011 10:53:44.840097 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Oct 11 10:53:44.845497 master-2 kubenswrapper[4776]: I1011 10:53:44.840925 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Oct 11 10:53:44.845497 master-2 kubenswrapper[4776]: I1011 10:53:44.844772 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-sc5rx"] Oct 11 10:53:45.038351 master-2 kubenswrapper[4776]: I1011 10:53:45.038279 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02c36342-76bf-457d-804c-cc6420176307-etc-swift\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.038611 master-2 kubenswrapper[4776]: I1011 10:53:45.038428 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-dispersionconf\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.038611 master-2 kubenswrapper[4776]: I1011 10:53:45.038495 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-ring-data-devices\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.038611 master-2 kubenswrapper[4776]: I1011 10:53:45.038532 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjdjr\" (UniqueName: \"kubernetes.io/projected/02c36342-76bf-457d-804c-cc6420176307-kube-api-access-jjdjr\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.038611 master-2 kubenswrapper[4776]: I1011 10:53:45.038575 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-combined-ca-bundle\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.038824 master-2 kubenswrapper[4776]: I1011 10:53:45.038782 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-scripts\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.038867 master-2 kubenswrapper[4776]: I1011 10:53:45.038837 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-swiftconf\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.141423 master-2 kubenswrapper[4776]: I1011 10:53:45.141280 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjdjr\" (UniqueName: \"kubernetes.io/projected/02c36342-76bf-457d-804c-cc6420176307-kube-api-access-jjdjr\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.141423 master-2 kubenswrapper[4776]: I1011 10:53:45.141351 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-combined-ca-bundle\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.141653 master-2 kubenswrapper[4776]: I1011 10:53:45.141429 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-scripts\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.141653 master-2 kubenswrapper[4776]: I1011 10:53:45.141464 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-swiftconf\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.141653 master-2 kubenswrapper[4776]: I1011 10:53:45.141499 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02c36342-76bf-457d-804c-cc6420176307-etc-swift\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.141653 master-2 
kubenswrapper[4776]: I1011 10:53:45.141555 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-dispersionconf\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.141653 master-2 kubenswrapper[4776]: I1011 10:53:45.141616 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-ring-data-devices\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.142157 master-2 kubenswrapper[4776]: I1011 10:53:45.142117 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02c36342-76bf-457d-804c-cc6420176307-etc-swift\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.142431 master-2 kubenswrapper[4776]: I1011 10:53:45.142405 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-ring-data-devices\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.142935 master-2 kubenswrapper[4776]: I1011 10:53:45.142903 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-scripts\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.144664 master-2 kubenswrapper[4776]: I1011 10:53:45.144621 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-dispersionconf\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.146023 master-2 kubenswrapper[4776]: I1011 10:53:45.145982 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-combined-ca-bundle\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.146240 master-2 kubenswrapper[4776]: I1011 10:53:45.146206 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-swiftconf\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.165151 master-2 kubenswrapper[4776]: I1011 10:53:45.165072 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjdjr\" (UniqueName: \"kubernetes.io/projected/02c36342-76bf-457d-804c-cc6420176307-kube-api-access-jjdjr\") pod \"swift-ring-rebalance-sc5rx\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.247008 master-2 kubenswrapper[4776]: I1011 10:53:45.246933 4776 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:45.732176 master-2 kubenswrapper[4776]: W1011 10:53:45.732130 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02c36342_76bf_457d_804c_cc6420176307.slice/crio-de04857ed802def71b5901a8ca752c6b0be20d594b21cb7c6b16ee036e171a13 WatchSource:0}: Error finding container de04857ed802def71b5901a8ca752c6b0be20d594b21cb7c6b16ee036e171a13: Status 404 returned error can't find the container with id de04857ed802def71b5901a8ca752c6b0be20d594b21cb7c6b16ee036e171a13 Oct 11 10:53:45.745644 master-2 kubenswrapper[4776]: I1011 10:53:45.745611 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-sc5rx"] Oct 11 10:53:45.765733 master-2 kubenswrapper[4776]: I1011 10:53:45.765612 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sc5rx" event={"ID":"02c36342-76bf-457d-804c-cc6420176307","Type":"ContainerStarted","Data":"de04857ed802def71b5901a8ca752c6b0be20d594b21cb7c6b16ee036e171a13"} Oct 11 10:53:45.769199 master-2 kubenswrapper[4776]: I1011 10:53:45.769124 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"914ac6d0-5a85-4b2d-b4d4-202def09b0d8","Type":"ContainerStarted","Data":"d7c3aa2bccdc7f11b2391184419d5b484fdaf3ce4012a302733c1e2ef52543ca"} Oct 11 10:53:45.769882 master-2 kubenswrapper[4776]: I1011 10:53:45.769844 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Oct 11 10:53:45.772943 master-2 kubenswrapper[4776]: I1011 10:53:45.772900 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-2" event={"ID":"5a8ba065-7ef6-4bab-b20a-3bb274c93fa0","Type":"ContainerStarted","Data":"d758fbf1cca948c7a73d92ffb31a3f8f4613783c271c227f599d4158dac754e6"} Oct 11 10:53:45.773272 master-2 kubenswrapper[4776]: I1011 10:53:45.773114 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:53:45.852951 master-2 kubenswrapper[4776]: I1011 10:53:45.852717 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=49.710245713 podStartE2EDuration="55.852696145s" podCreationTimestamp="2025-10-11 10:52:50 +0000 UTC" firstStartedPulling="2025-10-11 10:53:03.348224219 +0000 UTC m=+1618.132650928" lastFinishedPulling="2025-10-11 10:53:09.490674641 +0000 UTC m=+1624.275101360" observedRunningTime="2025-10-11 10:53:45.84476236 +0000 UTC m=+1660.629189079" watchObservedRunningTime="2025-10-11 10:53:45.852696145 +0000 UTC m=+1660.637122854" Oct 11 10:53:45.881172 master-2 kubenswrapper[4776]: I1011 10:53:45.880762 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-2" podStartSLOduration=54.880744065 podStartE2EDuration="54.880744065s" podCreationTimestamp="2025-10-11 10:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:53:45.875861412 +0000 UTC m=+1660.660288111" watchObservedRunningTime="2025-10-11 10:53:45.880744065 +0000 UTC m=+1660.665170774" Oct 11 10:53:46.533171 master-2 kubenswrapper[4776]: I1011 10:53:46.533012 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-8qhqm" 
podUID="065373ca-8c0f-489c-a72e-4d1aee1263ba" containerName="ovn-controller" probeResult="failure" output=< Oct 11 10:53:46.533171 master-2 kubenswrapper[4776]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 11 10:53:46.533171 master-2 kubenswrapper[4776]: > Oct 11 10:53:46.580693 master-2 kubenswrapper[4776]: I1011 10:53:46.580621 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:46.610554 master-2 kubenswrapper[4776]: I1011 10:53:46.610446 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4m6km" Oct 11 10:53:49.816180 master-2 kubenswrapper[4776]: I1011 10:53:49.816118 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sc5rx" event={"ID":"02c36342-76bf-457d-804c-cc6420176307","Type":"ContainerStarted","Data":"0c5b403d67b7cebb505d1369362477def7d8aa2286a84930fcb8513bb4ad37ac"} Oct 11 10:53:49.851534 master-2 kubenswrapper[4776]: I1011 10:53:49.851397 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-sc5rx" podStartSLOduration=2.473593882 podStartE2EDuration="5.851368787s" podCreationTimestamp="2025-10-11 10:53:44 +0000 UTC" firstStartedPulling="2025-10-11 10:53:45.735352767 +0000 UTC m=+1660.519779476" lastFinishedPulling="2025-10-11 10:53:49.113127682 +0000 UTC m=+1663.897554381" observedRunningTime="2025-10-11 10:53:49.844806929 +0000 UTC m=+1664.629233648" watchObservedRunningTime="2025-10-11 10:53:49.851368787 +0000 UTC m=+1664.635795496" Oct 11 10:53:50.363011 master-2 kubenswrapper[4776]: I1011 10:53:50.362952 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:50.414074 master-2 kubenswrapper[4776]: I1011 10:53:50.414029 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Oct 11 10:53:51.529551 master-2 kubenswrapper[4776]: I1011 10:53:51.529486 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-8qhqm" podUID="065373ca-8c0f-489c-a72e-4d1aee1263ba" containerName="ovn-controller" probeResult="failure" output=< Oct 11 10:53:51.529551 master-2 kubenswrapper[4776]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 11 10:53:51.529551 master-2 kubenswrapper[4776]: > Oct 11 10:53:51.539464 master-2 kubenswrapper[4776]: I1011 10:53:51.539290 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8qhqm-config-wt655"] Oct 11 10:53:51.540569 master-2 kubenswrapper[4776]: I1011 10:53:51.540517 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.548141 master-2 kubenswrapper[4776]: I1011 10:53:51.546088 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 11 10:53:51.567669 master-2 kubenswrapper[4776]: I1011 10:53:51.567299 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8qhqm-config-wt655"] Oct 11 10:53:51.689660 master-2 kubenswrapper[4776]: I1011 10:53:51.689596 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-scripts\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.689892 master-2 kubenswrapper[4776]: I1011 10:53:51.689747 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlxnz\" (UniqueName: \"kubernetes.io/projected/36684b23-8b0f-409c-8b8b-c2402189f68e-kube-api-access-nlxnz\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.689892 master-2 kubenswrapper[4776]: I1011 10:53:51.689800 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.689892 master-2 kubenswrapper[4776]: I1011 10:53:51.689836 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-additional-scripts\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.689892 master-2 kubenswrapper[4776]: I1011 10:53:51.689864 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run-ovn\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.689892 master-2 kubenswrapper[4776]: I1011 10:53:51.689887 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-log-ovn\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.791413 master-2 kubenswrapper[4776]: I1011 10:53:51.791302 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-scripts\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.791413 master-2 kubenswrapper[4776]: I1011 10:53:51.791407 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-nlxnz\" (UniqueName: \"kubernetes.io/projected/36684b23-8b0f-409c-8b8b-c2402189f68e-kube-api-access-nlxnz\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.791621 master-2 kubenswrapper[4776]: I1011 10:53:51.791449 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.791621 master-2 kubenswrapper[4776]: I1011 10:53:51.791475 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-additional-scripts\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.791621 master-2 kubenswrapper[4776]: I1011 10:53:51.791497 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run-ovn\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.791621 master-2 kubenswrapper[4776]: I1011 10:53:51.791515 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-log-ovn\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.791621 master-2 kubenswrapper[4776]: I1011 10:53:51.791620 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-log-ovn\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.791953 master-2 kubenswrapper[4776]: I1011 10:53:51.791929 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.792548 master-2 kubenswrapper[4776]: I1011 10:53:51.792515 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-additional-scripts\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.792597 master-2 kubenswrapper[4776]: I1011 10:53:51.792571 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run-ovn\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.793459 master-2 kubenswrapper[4776]: I1011 
10:53:51.793414 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-scripts\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.817697 master-2 kubenswrapper[4776]: I1011 10:53:51.812464 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlxnz\" (UniqueName: \"kubernetes.io/projected/36684b23-8b0f-409c-8b8b-c2402189f68e-kube-api-access-nlxnz\") pod \"ovn-controller-8qhqm-config-wt655\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:51.892159 master-2 kubenswrapper[4776]: I1011 10:53:51.892088 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:52.312629 master-2 kubenswrapper[4776]: I1011 10:53:52.298920 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8qhqm-config-wt655"] Oct 11 10:53:52.312629 master-2 kubenswrapper[4776]: W1011 10:53:52.301444 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36684b23_8b0f_409c_8b8b_c2402189f68e.slice/crio-52d1a0b423f1130d7c8ac76ca9e53742c68d853d59ebefff64b3fc353e522a46 WatchSource:0}: Error finding container 52d1a0b423f1130d7c8ac76ca9e53742c68d853d59ebefff64b3fc353e522a46: Status 404 returned error can't find the container with id 52d1a0b423f1130d7c8ac76ca9e53742c68d853d59ebefff64b3fc353e522a46 Oct 11 10:53:52.600932 master-2 kubenswrapper[4776]: I1011 10:53:52.600807 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86d565bb9-rbc2v"] Oct 11 10:53:52.601493 master-2 kubenswrapper[4776]: I1011 10:53:52.601112 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" podUID="c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" containerName="dnsmasq-dns" containerID="cri-o://21c4e7fcdc124283dbaf4b8ea241a8c8865c7c94e13f81aee9cfb5aab43aac68" gracePeriod=10 Oct 11 10:53:52.841698 master-2 kubenswrapper[4776]: I1011 10:53:52.840867 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8qhqm-config-wt655" event={"ID":"36684b23-8b0f-409c-8b8b-c2402189f68e","Type":"ContainerStarted","Data":"33f6c7d3d49df3aa00341fc88f0a2426a5e9fdd61368c7621163e1d8e6740b52"} Oct 11 10:53:52.841698 master-2 kubenswrapper[4776]: I1011 10:53:52.840913 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8qhqm-config-wt655" event={"ID":"36684b23-8b0f-409c-8b8b-c2402189f68e","Type":"ContainerStarted","Data":"52d1a0b423f1130d7c8ac76ca9e53742c68d853d59ebefff64b3fc353e522a46"} Oct 11 10:53:52.870102 master-2 kubenswrapper[4776]: I1011 10:53:52.868218 4776 generic.go:334] "Generic (PLEG): container finished" podID="c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" containerID="21c4e7fcdc124283dbaf4b8ea241a8c8865c7c94e13f81aee9cfb5aab43aac68" exitCode=0 Oct 11 10:53:52.870102 master-2 kubenswrapper[4776]: I1011 10:53:52.868320 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" event={"ID":"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5","Type":"ContainerDied","Data":"21c4e7fcdc124283dbaf4b8ea241a8c8865c7c94e13f81aee9cfb5aab43aac68"} Oct 11 10:53:52.878371 master-2 kubenswrapper[4776]: I1011 
10:53:52.878222 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-8qhqm-config-wt655" podStartSLOduration=1.878187667 podStartE2EDuration="1.878187667s" podCreationTimestamp="2025-10-11 10:53:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:53:52.871922557 +0000 UTC m=+1667.656349286" watchObservedRunningTime="2025-10-11 10:53:52.878187667 +0000 UTC m=+1667.662614366" Oct 11 10:53:53.407602 master-2 kubenswrapper[4776]: I1011 10:53:53.407553 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:53:53.532004 master-2 kubenswrapper[4776]: I1011 10:53:53.531954 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-dns-svc\") pod \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") " Oct 11 10:53:53.532557 master-2 kubenswrapper[4776]: I1011 10:53:53.532529 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-config\") pod \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") " Oct 11 10:53:53.532628 master-2 kubenswrapper[4776]: I1011 10:53:53.532617 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnm67\" (UniqueName: \"kubernetes.io/projected/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-kube-api-access-vnm67\") pod \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\" (UID: \"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5\") " Oct 11 10:53:53.547512 master-2 kubenswrapper[4776]: I1011 10:53:53.547452 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-kube-api-access-vnm67" (OuterVolumeSpecName: "kube-api-access-vnm67") pod "c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" (UID: "c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5"). InnerVolumeSpecName "kube-api-access-vnm67". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:53:53.564474 master-2 kubenswrapper[4776]: I1011 10:53:53.564436 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" (UID: "c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:53.565974 master-2 kubenswrapper[4776]: I1011 10:53:53.565924 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-config" (OuterVolumeSpecName: "config") pod "c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" (UID: "c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:53.635413 master-2 kubenswrapper[4776]: I1011 10:53:53.635282 4776 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-dns-svc\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:53.635413 master-2 kubenswrapper[4776]: I1011 10:53:53.635333 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:53.635413 master-2 kubenswrapper[4776]: I1011 10:53:53.635344 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnm67\" (UniqueName: \"kubernetes.io/projected/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5-kube-api-access-vnm67\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:53.877718 master-2 kubenswrapper[4776]: I1011 10:53:53.875732 4776 generic.go:334] "Generic (PLEG): container finished" podID="36684b23-8b0f-409c-8b8b-c2402189f68e" containerID="33f6c7d3d49df3aa00341fc88f0a2426a5e9fdd61368c7621163e1d8e6740b52" exitCode=0 Oct 11 10:53:53.877718 master-2 kubenswrapper[4776]: I1011 10:53:53.875802 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8qhqm-config-wt655" event={"ID":"36684b23-8b0f-409c-8b8b-c2402189f68e","Type":"ContainerDied","Data":"33f6c7d3d49df3aa00341fc88f0a2426a5e9fdd61368c7621163e1d8e6740b52"} Oct 11 10:53:53.878064 master-2 kubenswrapper[4776]: I1011 10:53:53.878022 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" event={"ID":"c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5","Type":"ContainerDied","Data":"f2516a6982d29372b482ef8dfc55f32264f7df4b11429f9906ad40bd48d5344a"} Oct 11 10:53:53.878064 master-2 kubenswrapper[4776]: I1011 10:53:53.878062 4776 scope.go:117] "RemoveContainer" containerID="21c4e7fcdc124283dbaf4b8ea241a8c8865c7c94e13f81aee9cfb5aab43aac68" Oct 11 10:53:53.878217 master-2 kubenswrapper[4776]: I1011 10:53:53.878184 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86d565bb9-rbc2v" Oct 11 10:53:53.897857 master-2 kubenswrapper[4776]: I1011 10:53:53.897814 4776 scope.go:117] "RemoveContainer" containerID="c0ec6bd81c8aa0ea34befa100e7e8df08ded596440992a0b1a3ffb750f413afb" Oct 11 10:53:53.937073 master-2 kubenswrapper[4776]: I1011 10:53:53.936448 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86d565bb9-rbc2v"] Oct 11 10:53:53.938844 master-2 kubenswrapper[4776]: E1011 10:53:53.938313 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8f57d5f_9c0f_4081_8ed7_3ef2cad93bb5.slice/crio-f2516a6982d29372b482ef8dfc55f32264f7df4b11429f9906ad40bd48d5344a\": RecentStats: unable to find data in memory cache]" Oct 11 10:53:53.942345 master-2 kubenswrapper[4776]: I1011 10:53:53.942191 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86d565bb9-rbc2v"] Oct 11 10:53:54.069329 master-2 kubenswrapper[4776]: I1011 10:53:54.069263 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" path="/var/lib/kubelet/pods/c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5/volumes" Oct 11 10:53:54.127588 master-2 kubenswrapper[4776]: I1011 10:53:54.127521 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-848z9"] Oct 11 10:53:54.128043 master-2 kubenswrapper[4776]: E1011 10:53:54.128014 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" containerName="dnsmasq-dns" Oct 11 10:53:54.128043 master-2 kubenswrapper[4776]: I1011 10:53:54.128041 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" containerName="dnsmasq-dns" Oct 11 10:53:54.128170 master-2 kubenswrapper[4776]: E1011 10:53:54.128084 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" containerName="init" Oct 11 10:53:54.128170 master-2 kubenswrapper[4776]: I1011 10:53:54.128095 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" containerName="init" Oct 11 10:53:54.128312 master-2 kubenswrapper[4776]: I1011 10:53:54.128293 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8f57d5f-9c0f-4081-8ed7-3ef2cad93bb5" containerName="dnsmasq-dns" Oct 11 10:53:54.129112 master-2 kubenswrapper[4776]: I1011 10:53:54.129082 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.131817 master-2 kubenswrapper[4776]: I1011 10:53:54.131781 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-b5802-config-data" Oct 11 10:53:54.143136 master-2 kubenswrapper[4776]: I1011 10:53:54.142442 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-combined-ca-bundle\") pod \"glance-db-sync-848z9\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.143136 master-2 kubenswrapper[4776]: I1011 10:53:54.142519 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-config-data\") pod \"glance-db-sync-848z9\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.143136 master-2 kubenswrapper[4776]: I1011 10:53:54.142591 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhdjz\" (UniqueName: \"kubernetes.io/projected/25099d7a-e434-48d2-a175-088e5ad2caf2-kube-api-access-jhdjz\") pod \"glance-db-sync-848z9\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.143136 master-2 kubenswrapper[4776]: I1011 10:53:54.142615 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-db-sync-config-data\") pod \"glance-db-sync-848z9\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.144990 master-2 kubenswrapper[4776]: I1011 10:53:54.144852 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-848z9"] Oct 11 10:53:54.243728 master-2 kubenswrapper[4776]: I1011 10:53:54.243638 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-combined-ca-bundle\") pod \"glance-db-sync-848z9\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.243937 master-2 kubenswrapper[4776]: I1011 10:53:54.243758 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-config-data\") pod \"glance-db-sync-848z9\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.243937 master-2 kubenswrapper[4776]: I1011 10:53:54.243827 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhdjz\" (UniqueName: \"kubernetes.io/projected/25099d7a-e434-48d2-a175-088e5ad2caf2-kube-api-access-jhdjz\") pod \"glance-db-sync-848z9\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.243937 master-2 kubenswrapper[4776]: I1011 10:53:54.243851 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-db-sync-config-data\") pod \"glance-db-sync-848z9\" (UID: 
\"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.247324 master-2 kubenswrapper[4776]: I1011 10:53:54.247269 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-db-sync-config-data\") pod \"glance-db-sync-848z9\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.247385 master-2 kubenswrapper[4776]: I1011 10:53:54.247341 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-config-data\") pod \"glance-db-sync-848z9\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.247419 master-2 kubenswrapper[4776]: I1011 10:53:54.247344 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-combined-ca-bundle\") pod \"glance-db-sync-848z9\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.264709 master-2 kubenswrapper[4776]: I1011 10:53:54.263963 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhdjz\" (UniqueName: \"kubernetes.io/projected/25099d7a-e434-48d2-a175-088e5ad2caf2-kube-api-access-jhdjz\") pod \"glance-db-sync-848z9\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " pod="openstack/glance-db-sync-848z9" Oct 11 10:53:54.448964 master-2 kubenswrapper[4776]: I1011 10:53:54.448883 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-848z9" Oct 11 10:53:55.013372 master-2 kubenswrapper[4776]: I1011 10:53:55.013317 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-848z9"] Oct 11 10:53:55.020612 master-2 kubenswrapper[4776]: W1011 10:53:55.020555 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25099d7a_e434_48d2_a175_088e5ad2caf2.slice/crio-0be39d2643cd63048b0bc49a71ba425692d3f2086de0e9779d98622fc7802a88 WatchSource:0}: Error finding container 0be39d2643cd63048b0bc49a71ba425692d3f2086de0e9779d98622fc7802a88: Status 404 returned error can't find the container with id 0be39d2643cd63048b0bc49a71ba425692d3f2086de0e9779d98622fc7802a88 Oct 11 10:53:55.565784 master-2 kubenswrapper[4776]: I1011 10:53:55.565737 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:55.677797 master-2 kubenswrapper[4776]: I1011 10:53:55.677716 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-scripts\") pod \"36684b23-8b0f-409c-8b8b-c2402189f68e\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " Oct 11 10:53:55.677797 master-2 kubenswrapper[4776]: I1011 10:53:55.677793 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run\") pod \"36684b23-8b0f-409c-8b8b-c2402189f68e\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " Oct 11 10:53:55.678130 master-2 kubenswrapper[4776]: I1011 10:53:55.677816 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run-ovn\") pod \"36684b23-8b0f-409c-8b8b-c2402189f68e\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " Oct 11 10:53:55.678130 master-2 kubenswrapper[4776]: I1011 10:53:55.677942 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "36684b23-8b0f-409c-8b8b-c2402189f68e" (UID: "36684b23-8b0f-409c-8b8b-c2402189f68e"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:53:55.678130 master-2 kubenswrapper[4776]: I1011 10:53:55.677983 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-log-ovn\") pod \"36684b23-8b0f-409c-8b8b-c2402189f68e\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " Oct 11 10:53:55.678130 master-2 kubenswrapper[4776]: I1011 10:53:55.677957 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run" (OuterVolumeSpecName: "var-run") pod "36684b23-8b0f-409c-8b8b-c2402189f68e" (UID: "36684b23-8b0f-409c-8b8b-c2402189f68e"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:53:55.678130 master-2 kubenswrapper[4776]: I1011 10:53:55.678043 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "36684b23-8b0f-409c-8b8b-c2402189f68e" (UID: "36684b23-8b0f-409c-8b8b-c2402189f68e"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:53:55.678130 master-2 kubenswrapper[4776]: I1011 10:53:55.678063 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlxnz\" (UniqueName: \"kubernetes.io/projected/36684b23-8b0f-409c-8b8b-c2402189f68e-kube-api-access-nlxnz\") pod \"36684b23-8b0f-409c-8b8b-c2402189f68e\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " Oct 11 10:53:55.678399 master-2 kubenswrapper[4776]: I1011 10:53:55.678257 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-additional-scripts\") pod \"36684b23-8b0f-409c-8b8b-c2402189f68e\" (UID: \"36684b23-8b0f-409c-8b8b-c2402189f68e\") " Oct 11 10:53:55.679032 master-2 kubenswrapper[4776]: I1011 10:53:55.678967 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "36684b23-8b0f-409c-8b8b-c2402189f68e" (UID: "36684b23-8b0f-409c-8b8b-c2402189f68e"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:55.679196 master-2 kubenswrapper[4776]: I1011 10:53:55.679168 4776 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:55.679242 master-2 kubenswrapper[4776]: I1011 10:53:55.679195 4776 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-run-ovn\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:55.679274 master-2 kubenswrapper[4776]: I1011 10:53:55.679244 4776 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36684b23-8b0f-409c-8b8b-c2402189f68e-var-log-ovn\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:55.679274 master-2 kubenswrapper[4776]: I1011 10:53:55.679257 4776 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-additional-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:55.680440 master-2 kubenswrapper[4776]: I1011 10:53:55.680337 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-scripts" (OuterVolumeSpecName: "scripts") pod "36684b23-8b0f-409c-8b8b-c2402189f68e" (UID: "36684b23-8b0f-409c-8b8b-c2402189f68e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:55.685364 master-2 kubenswrapper[4776]: I1011 10:53:55.685322 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36684b23-8b0f-409c-8b8b-c2402189f68e-kube-api-access-nlxnz" (OuterVolumeSpecName: "kube-api-access-nlxnz") pod "36684b23-8b0f-409c-8b8b-c2402189f68e" (UID: "36684b23-8b0f-409c-8b8b-c2402189f68e"). InnerVolumeSpecName "kube-api-access-nlxnz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:53:55.780943 master-2 kubenswrapper[4776]: I1011 10:53:55.780887 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36684b23-8b0f-409c-8b8b-c2402189f68e-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:55.780943 master-2 kubenswrapper[4776]: I1011 10:53:55.780924 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlxnz\" (UniqueName: \"kubernetes.io/projected/36684b23-8b0f-409c-8b8b-c2402189f68e-kube-api-access-nlxnz\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:55.895109 master-2 kubenswrapper[4776]: I1011 10:53:55.895065 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8qhqm-config-wt655" event={"ID":"36684b23-8b0f-409c-8b8b-c2402189f68e","Type":"ContainerDied","Data":"52d1a0b423f1130d7c8ac76ca9e53742c68d853d59ebefff64b3fc353e522a46"} Oct 11 10:53:55.895109 master-2 kubenswrapper[4776]: I1011 10:53:55.895106 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8qhqm-config-wt655" Oct 11 10:53:55.895346 master-2 kubenswrapper[4776]: I1011 10:53:55.895134 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52d1a0b423f1130d7c8ac76ca9e53742c68d853d59ebefff64b3fc353e522a46" Oct 11 10:53:55.896617 master-2 kubenswrapper[4776]: I1011 10:53:55.896573 4776 generic.go:334] "Generic (PLEG): container finished" podID="02c36342-76bf-457d-804c-cc6420176307" containerID="0c5b403d67b7cebb505d1369362477def7d8aa2286a84930fcb8513bb4ad37ac" exitCode=0 Oct 11 10:53:55.896617 master-2 kubenswrapper[4776]: I1011 10:53:55.896605 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sc5rx" event={"ID":"02c36342-76bf-457d-804c-cc6420176307","Type":"ContainerDied","Data":"0c5b403d67b7cebb505d1369362477def7d8aa2286a84930fcb8513bb4ad37ac"} Oct 11 10:53:55.898162 master-2 kubenswrapper[4776]: I1011 10:53:55.898111 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-848z9" event={"ID":"25099d7a-e434-48d2-a175-088e5ad2caf2","Type":"ContainerStarted","Data":"0be39d2643cd63048b0bc49a71ba425692d3f2086de0e9779d98622fc7802a88"} Oct 11 10:53:55.981764 master-2 kubenswrapper[4776]: I1011 10:53:55.981586 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-8qhqm-config-wt655"] Oct 11 10:53:55.988567 master-2 kubenswrapper[4776]: I1011 10:53:55.988506 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-8qhqm-config-wt655"] Oct 11 10:53:56.072784 master-2 kubenswrapper[4776]: I1011 10:53:56.072517 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36684b23-8b0f-409c-8b8b-c2402189f68e" path="/var/lib/kubelet/pods/36684b23-8b0f-409c-8b8b-c2402189f68e/volumes" Oct 11 10:53:56.109557 master-2 kubenswrapper[4776]: I1011 10:53:56.109489 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8qhqm-config-pnhck"] Oct 11 10:53:56.110389 master-2 kubenswrapper[4776]: E1011 10:53:56.110354 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36684b23-8b0f-409c-8b8b-c2402189f68e" containerName="ovn-config" Oct 11 10:53:56.110389 master-2 kubenswrapper[4776]: I1011 10:53:56.110390 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="36684b23-8b0f-409c-8b8b-c2402189f68e" containerName="ovn-config" Oct 11 10:53:56.110587 master-2 
kubenswrapper[4776]: I1011 10:53:56.110569 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="36684b23-8b0f-409c-8b8b-c2402189f68e" containerName="ovn-config" Oct 11 10:53:56.111431 master-2 kubenswrapper[4776]: I1011 10:53:56.111411 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.115936 master-2 kubenswrapper[4776]: I1011 10:53:56.115893 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 11 10:53:56.125589 master-2 kubenswrapper[4776]: I1011 10:53:56.125545 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8qhqm-config-pnhck"] Oct 11 10:53:56.289734 master-2 kubenswrapper[4776]: I1011 10:53:56.289614 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-additional-scripts\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.289734 master-2 kubenswrapper[4776]: I1011 10:53:56.289714 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpnvp\" (UniqueName: \"kubernetes.io/projected/bedd9a1a-d96f-49da-93c4-971885dafbfa-kube-api-access-kpnvp\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.290076 master-2 kubenswrapper[4776]: I1011 10:53:56.289994 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-scripts\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.290238 master-2 kubenswrapper[4776]: I1011 10:53:56.290203 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run-ovn\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.290371 master-2 kubenswrapper[4776]: I1011 10:53:56.290350 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.290462 master-2 kubenswrapper[4776]: I1011 10:53:56.290447 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-log-ovn\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.391572 master-2 kubenswrapper[4776]: I1011 10:53:56.391509 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-log-ovn\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.391572 master-2 kubenswrapper[4776]: I1011 10:53:56.391573 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-additional-scripts\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.391907 master-2 kubenswrapper[4776]: I1011 10:53:56.391649 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpnvp\" (UniqueName: \"kubernetes.io/projected/bedd9a1a-d96f-49da-93c4-971885dafbfa-kube-api-access-kpnvp\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.391907 master-2 kubenswrapper[4776]: I1011 10:53:56.391697 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-scripts\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.391907 master-2 kubenswrapper[4776]: I1011 10:53:56.391720 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run-ovn\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.391907 master-2 kubenswrapper[4776]: I1011 10:53:56.391759 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.391907 master-2 kubenswrapper[4776]: I1011 10:53:56.391768 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-log-ovn\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.391907 master-2 kubenswrapper[4776]: I1011 10:53:56.391886 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.392188 master-2 kubenswrapper[4776]: I1011 10:53:56.391938 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run-ovn\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.392501 master-2 kubenswrapper[4776]: I1011 10:53:56.392439 4776 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-additional-scripts\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.394153 master-2 kubenswrapper[4776]: I1011 10:53:56.394110 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-scripts\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.436742 master-2 kubenswrapper[4776]: I1011 10:53:56.436701 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpnvp\" (UniqueName: \"kubernetes.io/projected/bedd9a1a-d96f-49da-93c4-971885dafbfa-kube-api-access-kpnvp\") pod \"ovn-controller-8qhqm-config-pnhck\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.439830 master-2 kubenswrapper[4776]: I1011 10:53:56.439786 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:56.523962 master-2 kubenswrapper[4776]: I1011 10:53:56.523905 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-2" podUID="5a8ba065-7ef6-4bab-b20a-3bb274c93fa0" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.128.0.122:5671: connect: connection refused" Oct 11 10:53:56.537107 master-2 kubenswrapper[4776]: I1011 10:53:56.537062 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-8qhqm" Oct 11 10:53:56.963838 master-2 kubenswrapper[4776]: I1011 10:53:56.963790 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8qhqm-config-pnhck"] Oct 11 10:53:56.964027 master-2 kubenswrapper[4776]: W1011 10:53:56.963959 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbedd9a1a_d96f_49da_93c4_971885dafbfa.slice/crio-c788c22ef2bb450434f745677a534f4ed876639c6e3b38f9a131643958abd1d0 WatchSource:0}: Error finding container c788c22ef2bb450434f745677a534f4ed876639c6e3b38f9a131643958abd1d0: Status 404 returned error can't find the container with id c788c22ef2bb450434f745677a534f4ed876639c6e3b38f9a131643958abd1d0 Oct 11 10:53:57.540603 master-2 kubenswrapper[4776]: I1011 10:53:57.540324 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:57.631667 master-2 kubenswrapper[4776]: I1011 10:53:57.631264 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-combined-ca-bundle\") pod \"02c36342-76bf-457d-804c-cc6420176307\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " Oct 11 10:53:57.631667 master-2 kubenswrapper[4776]: I1011 10:53:57.631418 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02c36342-76bf-457d-804c-cc6420176307-etc-swift\") pod \"02c36342-76bf-457d-804c-cc6420176307\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " Oct 11 10:53:57.631667 master-2 kubenswrapper[4776]: I1011 10:53:57.631446 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-ring-data-devices\") pod \"02c36342-76bf-457d-804c-cc6420176307\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " Oct 11 10:53:57.631667 master-2 kubenswrapper[4776]: I1011 10:53:57.631492 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-scripts\") pod \"02c36342-76bf-457d-804c-cc6420176307\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " Oct 11 10:53:57.631667 master-2 kubenswrapper[4776]: I1011 10:53:57.631517 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjdjr\" (UniqueName: \"kubernetes.io/projected/02c36342-76bf-457d-804c-cc6420176307-kube-api-access-jjdjr\") pod \"02c36342-76bf-457d-804c-cc6420176307\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " Oct 11 10:53:57.631667 master-2 kubenswrapper[4776]: I1011 10:53:57.631574 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-dispersionconf\") pod \"02c36342-76bf-457d-804c-cc6420176307\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " Oct 11 10:53:57.631667 master-2 kubenswrapper[4776]: I1011 10:53:57.631597 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-swiftconf\") pod \"02c36342-76bf-457d-804c-cc6420176307\" (UID: \"02c36342-76bf-457d-804c-cc6420176307\") " Oct 11 10:53:57.633698 master-2 kubenswrapper[4776]: I1011 10:53:57.633620 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "02c36342-76bf-457d-804c-cc6420176307" (UID: "02c36342-76bf-457d-804c-cc6420176307"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:57.634176 master-2 kubenswrapper[4776]: I1011 10:53:57.634135 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02c36342-76bf-457d-804c-cc6420176307-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "02c36342-76bf-457d-804c-cc6420176307" (UID: "02c36342-76bf-457d-804c-cc6420176307"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:53:57.636781 master-2 kubenswrapper[4776]: I1011 10:53:57.636467 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02c36342-76bf-457d-804c-cc6420176307-kube-api-access-jjdjr" (OuterVolumeSpecName: "kube-api-access-jjdjr") pod "02c36342-76bf-457d-804c-cc6420176307" (UID: "02c36342-76bf-457d-804c-cc6420176307"). InnerVolumeSpecName "kube-api-access-jjdjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:53:57.641019 master-2 kubenswrapper[4776]: I1011 10:53:57.640985 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "02c36342-76bf-457d-804c-cc6420176307" (UID: "02c36342-76bf-457d-804c-cc6420176307"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:53:57.653092 master-2 kubenswrapper[4776]: I1011 10:53:57.652142 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-scripts" (OuterVolumeSpecName: "scripts") pod "02c36342-76bf-457d-804c-cc6420176307" (UID: "02c36342-76bf-457d-804c-cc6420176307"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:57.654752 master-2 kubenswrapper[4776]: I1011 10:53:57.654721 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02c36342-76bf-457d-804c-cc6420176307" (UID: "02c36342-76bf-457d-804c-cc6420176307"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:53:57.658320 master-2 kubenswrapper[4776]: I1011 10:53:57.658267 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "02c36342-76bf-457d-804c-cc6420176307" (UID: "02c36342-76bf-457d-804c-cc6420176307"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:53:57.733632 master-2 kubenswrapper[4776]: I1011 10:53:57.733564 4776 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-dispersionconf\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:57.733632 master-2 kubenswrapper[4776]: I1011 10:53:57.733625 4776 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-swiftconf\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:57.733632 master-2 kubenswrapper[4776]: I1011 10:53:57.733634 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c36342-76bf-457d-804c-cc6420176307-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:57.733905 master-2 kubenswrapper[4776]: I1011 10:53:57.733644 4776 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02c36342-76bf-457d-804c-cc6420176307-etc-swift\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:57.733905 master-2 kubenswrapper[4776]: I1011 10:53:57.733653 4776 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-ring-data-devices\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:57.733905 master-2 kubenswrapper[4776]: I1011 10:53:57.733663 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02c36342-76bf-457d-804c-cc6420176307-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:57.733905 master-2 kubenswrapper[4776]: I1011 10:53:57.733691 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjdjr\" (UniqueName: \"kubernetes.io/projected/02c36342-76bf-457d-804c-cc6420176307-kube-api-access-jjdjr\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:57.922317 master-2 kubenswrapper[4776]: I1011 10:53:57.922255 4776 generic.go:334] "Generic (PLEG): container finished" podID="bedd9a1a-d96f-49da-93c4-971885dafbfa" containerID="4c31e081cf778b32f0c186718d90bb335248738e9e0b592d92f2f4e35cfd127e" exitCode=0 Oct 11 10:53:57.922577 master-2 kubenswrapper[4776]: I1011 10:53:57.922315 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8qhqm-config-pnhck" event={"ID":"bedd9a1a-d96f-49da-93c4-971885dafbfa","Type":"ContainerDied","Data":"4c31e081cf778b32f0c186718d90bb335248738e9e0b592d92f2f4e35cfd127e"} Oct 11 10:53:57.922577 master-2 kubenswrapper[4776]: I1011 10:53:57.922374 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8qhqm-config-pnhck" event={"ID":"bedd9a1a-d96f-49da-93c4-971885dafbfa","Type":"ContainerStarted","Data":"c788c22ef2bb450434f745677a534f4ed876639c6e3b38f9a131643958abd1d0"} Oct 11 10:53:57.924346 master-2 kubenswrapper[4776]: I1011 10:53:57.924299 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sc5rx" event={"ID":"02c36342-76bf-457d-804c-cc6420176307","Type":"ContainerDied","Data":"de04857ed802def71b5901a8ca752c6b0be20d594b21cb7c6b16ee036e171a13"} Oct 11 10:53:57.924346 master-2 kubenswrapper[4776]: I1011 10:53:57.924326 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-sc5rx" Oct 11 10:53:57.924346 master-2 kubenswrapper[4776]: I1011 10:53:57.924344 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de04857ed802def71b5901a8ca752c6b0be20d594b21cb7c6b16ee036e171a13" Oct 11 10:53:59.658319 master-2 kubenswrapper[4776]: I1011 10:53:59.658281 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:53:59.665954 master-2 kubenswrapper[4776]: I1011 10:53:59.665919 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run\") pod \"bedd9a1a-d96f-49da-93c4-971885dafbfa\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " Oct 11 10:53:59.666089 master-2 kubenswrapper[4776]: I1011 10:53:59.665970 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run-ovn\") pod \"bedd9a1a-d96f-49da-93c4-971885dafbfa\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " Oct 11 10:53:59.666089 master-2 kubenswrapper[4776]: I1011 10:53:59.666010 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-scripts\") pod \"bedd9a1a-d96f-49da-93c4-971885dafbfa\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " Oct 11 10:53:59.666089 master-2 kubenswrapper[4776]: I1011 10:53:59.666022 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run" (OuterVolumeSpecName: "var-run") pod "bedd9a1a-d96f-49da-93c4-971885dafbfa" (UID: "bedd9a1a-d96f-49da-93c4-971885dafbfa"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:53:59.666089 master-2 kubenswrapper[4776]: I1011 10:53:59.666048 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "bedd9a1a-d96f-49da-93c4-971885dafbfa" (UID: "bedd9a1a-d96f-49da-93c4-971885dafbfa"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:53:59.666089 master-2 kubenswrapper[4776]: I1011 10:53:59.666094 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-additional-scripts\") pod \"bedd9a1a-d96f-49da-93c4-971885dafbfa\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " Oct 11 10:53:59.666260 master-2 kubenswrapper[4776]: I1011 10:53:59.666135 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-log-ovn\") pod \"bedd9a1a-d96f-49da-93c4-971885dafbfa\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " Oct 11 10:53:59.666260 master-2 kubenswrapper[4776]: I1011 10:53:59.666200 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpnvp\" (UniqueName: \"kubernetes.io/projected/bedd9a1a-d96f-49da-93c4-971885dafbfa-kube-api-access-kpnvp\") pod \"bedd9a1a-d96f-49da-93c4-971885dafbfa\" (UID: \"bedd9a1a-d96f-49da-93c4-971885dafbfa\") " Oct 11 10:53:59.666325 master-2 kubenswrapper[4776]: I1011 10:53:59.666273 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "bedd9a1a-d96f-49da-93c4-971885dafbfa" (UID: "bedd9a1a-d96f-49da-93c4-971885dafbfa"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:53:59.666518 master-2 kubenswrapper[4776]: I1011 10:53:59.666497 4776 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-log-ovn\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:59.666518 master-2 kubenswrapper[4776]: I1011 10:53:59.666515 4776 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:59.666596 master-2 kubenswrapper[4776]: I1011 10:53:59.666525 4776 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bedd9a1a-d96f-49da-93c4-971885dafbfa-var-run-ovn\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:59.666870 master-2 kubenswrapper[4776]: I1011 10:53:59.666839 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "bedd9a1a-d96f-49da-93c4-971885dafbfa" (UID: "bedd9a1a-d96f-49da-93c4-971885dafbfa"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:59.667202 master-2 kubenswrapper[4776]: I1011 10:53:59.667161 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-scripts" (OuterVolumeSpecName: "scripts") pod "bedd9a1a-d96f-49da-93c4-971885dafbfa" (UID: "bedd9a1a-d96f-49da-93c4-971885dafbfa"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:53:59.670872 master-2 kubenswrapper[4776]: I1011 10:53:59.670843 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bedd9a1a-d96f-49da-93c4-971885dafbfa-kube-api-access-kpnvp" (OuterVolumeSpecName: "kube-api-access-kpnvp") pod "bedd9a1a-d96f-49da-93c4-971885dafbfa" (UID: "bedd9a1a-d96f-49da-93c4-971885dafbfa"). InnerVolumeSpecName "kube-api-access-kpnvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:53:59.770452 master-2 kubenswrapper[4776]: I1011 10:53:59.770392 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:59.770452 master-2 kubenswrapper[4776]: I1011 10:53:59.770443 4776 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bedd9a1a-d96f-49da-93c4-971885dafbfa-additional-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:59.770452 master-2 kubenswrapper[4776]: I1011 10:53:59.770459 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpnvp\" (UniqueName: \"kubernetes.io/projected/bedd9a1a-d96f-49da-93c4-971885dafbfa-kube-api-access-kpnvp\") on node \"master-2\" DevicePath \"\"" Oct 11 10:53:59.943841 master-2 kubenswrapper[4776]: I1011 10:53:59.943801 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8qhqm-config-pnhck" event={"ID":"bedd9a1a-d96f-49da-93c4-971885dafbfa","Type":"ContainerDied","Data":"c788c22ef2bb450434f745677a534f4ed876639c6e3b38f9a131643958abd1d0"} Oct 11 10:53:59.943841 master-2 kubenswrapper[4776]: I1011 10:53:59.943839 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c788c22ef2bb450434f745677a534f4ed876639c6e3b38f9a131643958abd1d0" Oct 11 10:53:59.944126 master-2 kubenswrapper[4776]: I1011 10:53:59.943891 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8qhqm-config-pnhck" Oct 11 10:54:00.781682 master-2 kubenswrapper[4776]: I1011 10:54:00.781585 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-8qhqm-config-pnhck"] Oct 11 10:54:00.788606 master-2 kubenswrapper[4776]: I1011 10:54:00.788520 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-8qhqm-config-pnhck"] Oct 11 10:54:02.066629 master-2 kubenswrapper[4776]: I1011 10:54:02.066570 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bedd9a1a-d96f-49da-93c4-971885dafbfa" path="/var/lib/kubelet/pods/bedd9a1a-d96f-49da-93c4-971885dafbfa/volumes" Oct 11 10:54:02.735949 master-2 kubenswrapper[4776]: I1011 10:54:02.735881 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Oct 11 10:54:06.525419 master-2 kubenswrapper[4776]: I1011 10:54:06.525319 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-2" Oct 11 10:54:07.018305 master-2 kubenswrapper[4776]: I1011 10:54:07.017543 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-848z9" event={"ID":"25099d7a-e434-48d2-a175-088e5ad2caf2","Type":"ContainerStarted","Data":"f7b2a1a1f6cfb4760b3cd06bae0a54d2a7eb0b82c31eb466cd4d45304c8c9826"} Oct 11 10:54:07.049751 master-2 kubenswrapper[4776]: I1011 10:54:07.049540 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-848z9" podStartSLOduration=1.875301651 podStartE2EDuration="13.049518599s" podCreationTimestamp="2025-10-11 10:53:54 +0000 UTC" firstStartedPulling="2025-10-11 10:53:55.023609614 +0000 UTC m=+1669.808036323" lastFinishedPulling="2025-10-11 10:54:06.197826562 +0000 UTC m=+1680.982253271" observedRunningTime="2025-10-11 10:54:07.042833588 +0000 UTC m=+1681.827260297" watchObservedRunningTime="2025-10-11 10:54:07.049518599 +0000 UTC m=+1681.833945308" Oct 11 10:54:09.147312 master-2 kubenswrapper[4776]: I1011 10:54:09.147200 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-8dnfj"] Oct 11 10:54:09.147768 master-2 kubenswrapper[4776]: E1011 10:54:09.147542 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c36342-76bf-457d-804c-cc6420176307" containerName="swift-ring-rebalance" Oct 11 10:54:09.147768 master-2 kubenswrapper[4776]: I1011 10:54:09.147555 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c36342-76bf-457d-804c-cc6420176307" containerName="swift-ring-rebalance" Oct 11 10:54:09.147768 master-2 kubenswrapper[4776]: E1011 10:54:09.147572 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bedd9a1a-d96f-49da-93c4-971885dafbfa" containerName="ovn-config" Oct 11 10:54:09.147768 master-2 kubenswrapper[4776]: I1011 10:54:09.147578 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="bedd9a1a-d96f-49da-93c4-971885dafbfa" containerName="ovn-config" Oct 11 10:54:09.147768 master-2 kubenswrapper[4776]: I1011 10:54:09.147734 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="bedd9a1a-d96f-49da-93c4-971885dafbfa" containerName="ovn-config" Oct 11 10:54:09.147768 master-2 kubenswrapper[4776]: I1011 10:54:09.147750 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c36342-76bf-457d-804c-cc6420176307" containerName="swift-ring-rebalance" Oct 11 10:54:09.148343 master-2 kubenswrapper[4776]: I1011 10:54:09.148322 4776 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8dnfj" Oct 11 10:54:09.151396 master-2 kubenswrapper[4776]: I1011 10:54:09.151323 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 11 10:54:09.151650 master-2 kubenswrapper[4776]: I1011 10:54:09.151385 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 11 10:54:09.152337 master-2 kubenswrapper[4776]: I1011 10:54:09.152311 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 11 10:54:09.171534 master-2 kubenswrapper[4776]: I1011 10:54:09.171462 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8dnfj"] Oct 11 10:54:09.251287 master-2 kubenswrapper[4776]: I1011 10:54:09.250776 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-config-data\") pod \"keystone-db-sync-8dnfj\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") " pod="openstack/keystone-db-sync-8dnfj" Oct 11 10:54:09.251287 master-2 kubenswrapper[4776]: I1011 10:54:09.250868 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-combined-ca-bundle\") pod \"keystone-db-sync-8dnfj\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") " pod="openstack/keystone-db-sync-8dnfj" Oct 11 10:54:09.251287 master-2 kubenswrapper[4776]: I1011 10:54:09.251191 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nqtk\" (UniqueName: \"kubernetes.io/projected/e873bed5-1a50-4fb0-81b1-2225f4893b28-kube-api-access-4nqtk\") pod \"keystone-db-sync-8dnfj\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") " pod="openstack/keystone-db-sync-8dnfj" Oct 11 10:54:09.353600 master-2 kubenswrapper[4776]: I1011 10:54:09.353309 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-config-data\") pod \"keystone-db-sync-8dnfj\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") " pod="openstack/keystone-db-sync-8dnfj" Oct 11 10:54:09.353889 master-2 kubenswrapper[4776]: I1011 10:54:09.353634 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-combined-ca-bundle\") pod \"keystone-db-sync-8dnfj\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") " pod="openstack/keystone-db-sync-8dnfj" Oct 11 10:54:09.353889 master-2 kubenswrapper[4776]: I1011 10:54:09.353763 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nqtk\" (UniqueName: \"kubernetes.io/projected/e873bed5-1a50-4fb0-81b1-2225f4893b28-kube-api-access-4nqtk\") pod \"keystone-db-sync-8dnfj\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") " pod="openstack/keystone-db-sync-8dnfj" Oct 11 10:54:09.358461 master-2 kubenswrapper[4776]: I1011 10:54:09.358386 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-combined-ca-bundle\") pod \"keystone-db-sync-8dnfj\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") " 
pod="openstack/keystone-db-sync-8dnfj" Oct 11 10:54:09.362853 master-2 kubenswrapper[4776]: I1011 10:54:09.362795 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-config-data\") pod \"keystone-db-sync-8dnfj\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") " pod="openstack/keystone-db-sync-8dnfj" Oct 11 10:54:09.374918 master-2 kubenswrapper[4776]: I1011 10:54:09.374866 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nqtk\" (UniqueName: \"kubernetes.io/projected/e873bed5-1a50-4fb0-81b1-2225f4893b28-kube-api-access-4nqtk\") pod \"keystone-db-sync-8dnfj\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") " pod="openstack/keystone-db-sync-8dnfj" Oct 11 10:54:09.476345 master-2 kubenswrapper[4776]: I1011 10:54:09.476255 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8dnfj" Oct 11 10:54:09.890936 master-2 kubenswrapper[4776]: I1011 10:54:09.890875 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8dnfj"] Oct 11 10:54:09.891409 master-2 kubenswrapper[4776]: W1011 10:54:09.891384 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode873bed5_1a50_4fb0_81b1_2225f4893b28.slice/crio-0b06641be085fcf01d7e46fe302bfe6e5b8dfbdf3e87a305d5995d2fa7ab62cb WatchSource:0}: Error finding container 0b06641be085fcf01d7e46fe302bfe6e5b8dfbdf3e87a305d5995d2fa7ab62cb: Status 404 returned error can't find the container with id 0b06641be085fcf01d7e46fe302bfe6e5b8dfbdf3e87a305d5995d2fa7ab62cb Oct 11 10:54:10.053218 master-2 kubenswrapper[4776]: I1011 10:54:10.053156 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8dnfj" event={"ID":"e873bed5-1a50-4fb0-81b1-2225f4893b28","Type":"ContainerStarted","Data":"0b06641be085fcf01d7e46fe302bfe6e5b8dfbdf3e87a305d5995d2fa7ab62cb"} Oct 11 10:54:13.566323 master-2 kubenswrapper[4776]: I1011 10:54:13.566243 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8c6bd7965-l58cd"] Oct 11 10:54:13.567977 master-2 kubenswrapper[4776]: I1011 10:54:13.567951 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.577841 master-2 kubenswrapper[4776]: I1011 10:54:13.572100 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 11 10:54:13.577841 master-2 kubenswrapper[4776]: I1011 10:54:13.572185 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 11 10:54:13.577841 master-2 kubenswrapper[4776]: I1011 10:54:13.572220 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 11 10:54:13.577841 master-2 kubenswrapper[4776]: I1011 10:54:13.572304 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Oct 11 10:54:13.577841 master-2 kubenswrapper[4776]: I1011 10:54:13.572845 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 11 10:54:13.586340 master-2 kubenswrapper[4776]: I1011 10:54:13.584630 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8c6bd7965-l58cd"] Oct 11 10:54:13.635524 master-2 kubenswrapper[4776]: I1011 10:54:13.632828 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-config\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.635524 master-2 kubenswrapper[4776]: I1011 10:54:13.632957 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-svc\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.635524 master-2 kubenswrapper[4776]: I1011 10:54:13.632985 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-swift-storage-0\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.635524 master-2 kubenswrapper[4776]: I1011 10:54:13.633013 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-nb\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.635524 master-2 kubenswrapper[4776]: I1011 10:54:13.633055 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-sb\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.635524 master-2 kubenswrapper[4776]: I1011 10:54:13.633082 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndm6h\" (UniqueName: \"kubernetes.io/projected/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-kube-api-access-ndm6h\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: 
\"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.735096 master-2 kubenswrapper[4776]: I1011 10:54:13.734811 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-sb\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.735096 master-2 kubenswrapper[4776]: I1011 10:54:13.734884 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndm6h\" (UniqueName: \"kubernetes.io/projected/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-kube-api-access-ndm6h\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.735096 master-2 kubenswrapper[4776]: I1011 10:54:13.734930 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-config\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.735096 master-2 kubenswrapper[4776]: I1011 10:54:13.735008 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-svc\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.735096 master-2 kubenswrapper[4776]: I1011 10:54:13.735031 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-swift-storage-0\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.735096 master-2 kubenswrapper[4776]: I1011 10:54:13.735061 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-nb\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.736145 master-2 kubenswrapper[4776]: I1011 10:54:13.736118 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-nb\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.736726 master-2 kubenswrapper[4776]: I1011 10:54:13.736702 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-svc\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.737517 master-2 kubenswrapper[4776]: I1011 10:54:13.737370 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-sb\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: 
\"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.738198 master-2 kubenswrapper[4776]: I1011 10:54:13.738146 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-swift-storage-0\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.739114 master-2 kubenswrapper[4776]: I1011 10:54:13.739052 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-config\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.765421 master-2 kubenswrapper[4776]: I1011 10:54:13.765362 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndm6h\" (UniqueName: \"kubernetes.io/projected/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-kube-api-access-ndm6h\") pod \"dnsmasq-dns-8c6bd7965-l58cd\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:13.906307 master-2 kubenswrapper[4776]: I1011 10:54:13.906258 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:14.773937 master-2 kubenswrapper[4776]: I1011 10:54:14.773899 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8c6bd7965-l58cd"] Oct 11 10:54:15.091920 master-2 kubenswrapper[4776]: I1011 10:54:15.091859 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" event={"ID":"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb","Type":"ContainerStarted","Data":"10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b"} Oct 11 10:54:15.091920 master-2 kubenswrapper[4776]: I1011 10:54:15.091923 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" event={"ID":"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb","Type":"ContainerStarted","Data":"2ec4ba703e7155177399cd4035e4698136a2123a7fc01c195d9d7ff62132b49e"} Oct 11 10:54:15.093109 master-2 kubenswrapper[4776]: I1011 10:54:15.093053 4776 generic.go:334] "Generic (PLEG): container finished" podID="25099d7a-e434-48d2-a175-088e5ad2caf2" containerID="f7b2a1a1f6cfb4760b3cd06bae0a54d2a7eb0b82c31eb466cd4d45304c8c9826" exitCode=0 Oct 11 10:54:15.093184 master-2 kubenswrapper[4776]: I1011 10:54:15.093133 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-848z9" event={"ID":"25099d7a-e434-48d2-a175-088e5ad2caf2","Type":"ContainerDied","Data":"f7b2a1a1f6cfb4760b3cd06bae0a54d2a7eb0b82c31eb466cd4d45304c8c9826"} Oct 11 10:54:15.094419 master-2 kubenswrapper[4776]: I1011 10:54:15.094390 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8dnfj" event={"ID":"e873bed5-1a50-4fb0-81b1-2225f4893b28","Type":"ContainerStarted","Data":"9cbf7752342665c7e92852ff9e1ea1e5f0d5dc7a3ede8988348adb50918c085e"} Oct 11 10:54:15.206802 master-2 kubenswrapper[4776]: I1011 10:54:15.206720 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-8dnfj" podStartSLOduration=1.7169767500000002 podStartE2EDuration="6.206703141s" podCreationTimestamp="2025-10-11 10:54:09 +0000 UTC" firstStartedPulling="2025-10-11 
10:54:09.893648201 +0000 UTC m=+1684.678074900" lastFinishedPulling="2025-10-11 10:54:14.383374582 +0000 UTC m=+1689.167801291" observedRunningTime="2025-10-11 10:54:15.201201662 +0000 UTC m=+1689.985628371" watchObservedRunningTime="2025-10-11 10:54:15.206703141 +0000 UTC m=+1689.991129850" Oct 11 10:54:16.102503 master-2 kubenswrapper[4776]: I1011 10:54:16.102359 4776 generic.go:334] "Generic (PLEG): container finished" podID="8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" containerID="10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b" exitCode=0 Oct 11 10:54:16.102503 master-2 kubenswrapper[4776]: I1011 10:54:16.102419 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" event={"ID":"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb","Type":"ContainerDied","Data":"10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b"} Oct 11 10:54:17.020489 master-2 kubenswrapper[4776]: I1011 10:54:17.020434 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-848z9" Oct 11 10:54:17.102690 master-2 kubenswrapper[4776]: I1011 10:54:17.102605 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-config-data\") pod \"25099d7a-e434-48d2-a175-088e5ad2caf2\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " Oct 11 10:54:17.103292 master-2 kubenswrapper[4776]: I1011 10:54:17.102723 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhdjz\" (UniqueName: \"kubernetes.io/projected/25099d7a-e434-48d2-a175-088e5ad2caf2-kube-api-access-jhdjz\") pod \"25099d7a-e434-48d2-a175-088e5ad2caf2\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " Oct 11 10:54:17.103292 master-2 kubenswrapper[4776]: I1011 10:54:17.102824 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-db-sync-config-data\") pod \"25099d7a-e434-48d2-a175-088e5ad2caf2\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " Oct 11 10:54:17.103292 master-2 kubenswrapper[4776]: I1011 10:54:17.102874 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-combined-ca-bundle\") pod \"25099d7a-e434-48d2-a175-088e5ad2caf2\" (UID: \"25099d7a-e434-48d2-a175-088e5ad2caf2\") " Oct 11 10:54:17.107520 master-2 kubenswrapper[4776]: I1011 10:54:17.107467 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "25099d7a-e434-48d2-a175-088e5ad2caf2" (UID: "25099d7a-e434-48d2-a175-088e5ad2caf2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:17.107778 master-2 kubenswrapper[4776]: I1011 10:54:17.107739 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25099d7a-e434-48d2-a175-088e5ad2caf2-kube-api-access-jhdjz" (OuterVolumeSpecName: "kube-api-access-jhdjz") pod "25099d7a-e434-48d2-a175-088e5ad2caf2" (UID: "25099d7a-e434-48d2-a175-088e5ad2caf2"). InnerVolumeSpecName "kube-api-access-jhdjz". 
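Note: the pod_startup_latency_tracker entry above for openstack/keystone-db-sync-8dnfj reports both podStartE2EDuration and podStartSLOduration. For this entry the SLO figure is exactly the end-to-end figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal Go sketch of that arithmetic, using the timestamps copied from the entry; it reproduces the reported numbers but is not the kubelet's own tracker code:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the log entry above (monotonic "m=+..." suffixes dropped).
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-10-11 10:54:09 +0000 UTC")            // podCreationTimestamp
	firstPull, _ := time.Parse(layout, "2025-10-11 10:54:09.893648201 +0000 UTC") // firstStartedPulling
	lastPull, _ := time.Parse(layout, "2025-10-11 10:54:14.383374582 +0000 UTC")  // lastFinishedPulling
	observed, _ := time.Parse(layout, "2025-10-11 10:54:15.206703141 +0000 UTC")  // watchObservedRunningTime

	e2e := observed.Sub(created)         // ≈ 6.206703141s, the reported podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // ≈ 1.716976750s, the reported podStartSLOduration
	fmt.Println(e2e, slo)
}
```

In other words, 6.206703141s minus the 4.489726381s spent pulling images gives the 1.7169767s SLO duration in the entry; the later dnsmasq entry shows the degenerate case where the pull timestamps are zero values and the two durations coincide.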
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:17.114729 master-2 kubenswrapper[4776]: I1011 10:54:17.114626 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" event={"ID":"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb","Type":"ContainerStarted","Data":"04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced"} Oct 11 10:54:17.114958 master-2 kubenswrapper[4776]: I1011 10:54:17.114932 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:17.117312 master-2 kubenswrapper[4776]: I1011 10:54:17.116636 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-848z9" event={"ID":"25099d7a-e434-48d2-a175-088e5ad2caf2","Type":"ContainerDied","Data":"0be39d2643cd63048b0bc49a71ba425692d3f2086de0e9779d98622fc7802a88"} Oct 11 10:54:17.117312 master-2 kubenswrapper[4776]: I1011 10:54:17.116663 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0be39d2643cd63048b0bc49a71ba425692d3f2086de0e9779d98622fc7802a88" Oct 11 10:54:17.117312 master-2 kubenswrapper[4776]: I1011 10:54:17.116742 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-848z9" Oct 11 10:54:17.123553 master-2 kubenswrapper[4776]: I1011 10:54:17.123468 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25099d7a-e434-48d2-a175-088e5ad2caf2" (UID: "25099d7a-e434-48d2-a175-088e5ad2caf2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:17.143764 master-2 kubenswrapper[4776]: I1011 10:54:17.143617 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-config-data" (OuterVolumeSpecName: "config-data") pod "25099d7a-e434-48d2-a175-088e5ad2caf2" (UID: "25099d7a-e434-48d2-a175-088e5ad2caf2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:17.144433 master-2 kubenswrapper[4776]: I1011 10:54:17.144331 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" podStartSLOduration=4.14431242 podStartE2EDuration="4.14431242s" podCreationTimestamp="2025-10-11 10:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:17.142127921 +0000 UTC m=+1691.926554630" watchObservedRunningTime="2025-10-11 10:54:17.14431242 +0000 UTC m=+1691.928739149" Oct 11 10:54:17.204701 master-2 kubenswrapper[4776]: I1011 10:54:17.204551 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:17.204701 master-2 kubenswrapper[4776]: I1011 10:54:17.204592 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhdjz\" (UniqueName: \"kubernetes.io/projected/25099d7a-e434-48d2-a175-088e5ad2caf2-kube-api-access-jhdjz\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:17.204701 master-2 kubenswrapper[4776]: I1011 10:54:17.204604 4776 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-db-sync-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:17.204701 master-2 kubenswrapper[4776]: I1011 10:54:17.204612 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25099d7a-e434-48d2-a175-088e5ad2caf2-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:17.704596 master-2 kubenswrapper[4776]: I1011 10:54:17.704536 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8c6bd7965-l58cd"] Oct 11 10:54:17.739520 master-2 kubenswrapper[4776]: I1011 10:54:17.739006 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bb5fd84f-98bcs"] Oct 11 10:54:17.739520 master-2 kubenswrapper[4776]: E1011 10:54:17.739319 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25099d7a-e434-48d2-a175-088e5ad2caf2" containerName="glance-db-sync" Oct 11 10:54:17.739520 master-2 kubenswrapper[4776]: I1011 10:54:17.739331 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="25099d7a-e434-48d2-a175-088e5ad2caf2" containerName="glance-db-sync" Oct 11 10:54:17.739795 master-2 kubenswrapper[4776]: I1011 10:54:17.739558 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="25099d7a-e434-48d2-a175-088e5ad2caf2" containerName="glance-db-sync" Oct 11 10:54:17.740755 master-2 kubenswrapper[4776]: I1011 10:54:17.740712 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:17.763945 master-2 kubenswrapper[4776]: I1011 10:54:17.763916 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb5fd84f-98bcs"] Oct 11 10:54:17.826322 master-2 kubenswrapper[4776]: I1011 10:54:17.826010 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-svc\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:17.826322 master-2 kubenswrapper[4776]: I1011 10:54:17.826074 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8llfr\" (UniqueName: \"kubernetes.io/projected/7da38dfb-a995-4843-a05a-351e5dc557ae-kube-api-access-8llfr\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:17.826322 master-2 kubenswrapper[4776]: I1011 10:54:17.826155 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-nb\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:17.826910 master-2 kubenswrapper[4776]: I1011 10:54:17.826685 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-config\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:17.826910 master-2 kubenswrapper[4776]: I1011 10:54:17.826721 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-swift-storage-0\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:17.826910 master-2 kubenswrapper[4776]: I1011 10:54:17.826760 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-sb\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:17.928883 master-2 kubenswrapper[4776]: I1011 10:54:17.928817 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-config\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:17.928883 master-2 kubenswrapper[4776]: I1011 10:54:17.928881 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-swift-storage-0\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:17.929154 master-2 
kubenswrapper[4776]: I1011 10:54:17.928921 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-sb\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:17.929154 master-2 kubenswrapper[4776]: I1011 10:54:17.929025 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-svc\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:17.929154 master-2 kubenswrapper[4776]: I1011 10:54:17.929055 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8llfr\" (UniqueName: \"kubernetes.io/projected/7da38dfb-a995-4843-a05a-351e5dc557ae-kube-api-access-8llfr\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:17.929154 master-2 kubenswrapper[4776]: I1011 10:54:17.929108 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-nb\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:17.929917 master-2 kubenswrapper[4776]: I1011 10:54:17.929889 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-swift-storage-0\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:17.930060 master-2 kubenswrapper[4776]: I1011 10:54:17.930019 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-config\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:17.930108 master-2 kubenswrapper[4776]: I1011 10:54:17.930030 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-sb\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:17.930225 master-2 kubenswrapper[4776]: I1011 10:54:17.930188 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-svc\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:17.930263 master-2 kubenswrapper[4776]: I1011 10:54:17.930234 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-nb\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:17.965085 master-2 kubenswrapper[4776]: I1011 
10:54:17.965039 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8llfr\" (UniqueName: \"kubernetes.io/projected/7da38dfb-a995-4843-a05a-351e5dc557ae-kube-api-access-8llfr\") pod \"dnsmasq-dns-bb5fd84f-98bcs\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:18.064694 master-2 kubenswrapper[4776]: I1011 10:54:18.064587 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:18.546629 master-2 kubenswrapper[4776]: I1011 10:54:18.546579 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb5fd84f-98bcs"] Oct 11 10:54:18.555038 master-2 kubenswrapper[4776]: W1011 10:54:18.549933 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7da38dfb_a995_4843_a05a_351e5dc557ae.slice/crio-585cb1a3dd3493a9c6dcca4a20afacedee65336427561c983a60785158d1da86 WatchSource:0}: Error finding container 585cb1a3dd3493a9c6dcca4a20afacedee65336427561c983a60785158d1da86: Status 404 returned error can't find the container with id 585cb1a3dd3493a9c6dcca4a20afacedee65336427561c983a60785158d1da86 Oct 11 10:54:19.134418 master-2 kubenswrapper[4776]: I1011 10:54:19.134309 4776 generic.go:334] "Generic (PLEG): container finished" podID="7da38dfb-a995-4843-a05a-351e5dc557ae" containerID="456a11f9f99ee2dc27577d1a40bfabc1a966dba6a7ff7a1deaefb75df56315fe" exitCode=0 Oct 11 10:54:19.134418 master-2 kubenswrapper[4776]: I1011 10:54:19.134382 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" event={"ID":"7da38dfb-a995-4843-a05a-351e5dc557ae","Type":"ContainerDied","Data":"456a11f9f99ee2dc27577d1a40bfabc1a966dba6a7ff7a1deaefb75df56315fe"} Oct 11 10:54:19.135018 master-2 kubenswrapper[4776]: I1011 10:54:19.134418 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" event={"ID":"7da38dfb-a995-4843-a05a-351e5dc557ae","Type":"ContainerStarted","Data":"585cb1a3dd3493a9c6dcca4a20afacedee65336427561c983a60785158d1da86"} Oct 11 10:54:19.136302 master-2 kubenswrapper[4776]: I1011 10:54:19.136264 4776 generic.go:334] "Generic (PLEG): container finished" podID="e873bed5-1a50-4fb0-81b1-2225f4893b28" containerID="9cbf7752342665c7e92852ff9e1ea1e5f0d5dc7a3ede8988348adb50918c085e" exitCode=0 Oct 11 10:54:19.136355 master-2 kubenswrapper[4776]: I1011 10:54:19.136330 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8dnfj" event={"ID":"e873bed5-1a50-4fb0-81b1-2225f4893b28","Type":"ContainerDied","Data":"9cbf7752342665c7e92852ff9e1ea1e5f0d5dc7a3ede8988348adb50918c085e"} Oct 11 10:54:19.136474 master-2 kubenswrapper[4776]: I1011 10:54:19.136441 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" podUID="8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" containerName="dnsmasq-dns" containerID="cri-o://04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced" gracePeriod=10 Oct 11 10:54:19.967106 master-2 kubenswrapper[4776]: I1011 10:54:19.967065 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:20.063527 master-2 kubenswrapper[4776]: I1011 10:54:20.063452 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-nb\") pod \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " Oct 11 10:54:20.063755 master-2 kubenswrapper[4776]: I1011 10:54:20.063541 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndm6h\" (UniqueName: \"kubernetes.io/projected/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-kube-api-access-ndm6h\") pod \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " Oct 11 10:54:20.063755 master-2 kubenswrapper[4776]: I1011 10:54:20.063577 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-config\") pod \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " Oct 11 10:54:20.063755 master-2 kubenswrapper[4776]: I1011 10:54:20.063643 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-svc\") pod \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " Oct 11 10:54:20.063857 master-2 kubenswrapper[4776]: I1011 10:54:20.063777 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-swift-storage-0\") pod \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " Oct 11 10:54:20.063857 master-2 kubenswrapper[4776]: I1011 10:54:20.063808 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-sb\") pod \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\" (UID: \"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb\") " Oct 11 10:54:20.075526 master-2 kubenswrapper[4776]: I1011 10:54:20.075446 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-kube-api-access-ndm6h" (OuterVolumeSpecName: "kube-api-access-ndm6h") pod "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" (UID: "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb"). InnerVolumeSpecName "kube-api-access-ndm6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:20.101808 master-2 kubenswrapper[4776]: I1011 10:54:20.101709 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-config" (OuterVolumeSpecName: "config") pod "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" (UID: "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:20.102210 master-2 kubenswrapper[4776]: I1011 10:54:20.102160 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" (UID: "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:20.105555 master-2 kubenswrapper[4776]: I1011 10:54:20.105518 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" (UID: "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:20.108082 master-2 kubenswrapper[4776]: I1011 10:54:20.108027 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" (UID: "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:20.108628 master-2 kubenswrapper[4776]: I1011 10:54:20.108595 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" (UID: "8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:20.144185 master-2 kubenswrapper[4776]: I1011 10:54:20.144017 4776 generic.go:334] "Generic (PLEG): container finished" podID="8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" containerID="04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced" exitCode=0 Oct 11 10:54:20.144185 master-2 kubenswrapper[4776]: I1011 10:54:20.144118 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" event={"ID":"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb","Type":"ContainerDied","Data":"04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced"} Oct 11 10:54:20.144526 master-2 kubenswrapper[4776]: I1011 10:54:20.144238 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" Oct 11 10:54:20.144526 master-2 kubenswrapper[4776]: I1011 10:54:20.144328 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6bd7965-l58cd" event={"ID":"8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb","Type":"ContainerDied","Data":"2ec4ba703e7155177399cd4035e4698136a2123a7fc01c195d9d7ff62132b49e"} Oct 11 10:54:20.144526 master-2 kubenswrapper[4776]: I1011 10:54:20.144353 4776 scope.go:117] "RemoveContainer" containerID="04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced" Oct 11 10:54:20.147122 master-2 kubenswrapper[4776]: I1011 10:54:20.147076 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" event={"ID":"7da38dfb-a995-4843-a05a-351e5dc557ae","Type":"ContainerStarted","Data":"bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7"} Oct 11 10:54:20.147185 master-2 kubenswrapper[4776]: I1011 10:54:20.147154 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:20.165302 master-2 kubenswrapper[4776]: I1011 10:54:20.165253 4776 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-swift-storage-0\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:20.165302 master-2 kubenswrapper[4776]: I1011 10:54:20.165290 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-sb\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:20.165302 master-2 kubenswrapper[4776]: I1011 10:54:20.165299 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-ovsdbserver-nb\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:20.165302 master-2 kubenswrapper[4776]: I1011 10:54:20.165311 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndm6h\" (UniqueName: \"kubernetes.io/projected/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-kube-api-access-ndm6h\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:20.165635 master-2 kubenswrapper[4776]: I1011 10:54:20.165325 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:20.165635 master-2 kubenswrapper[4776]: I1011 10:54:20.165339 4776 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb-dns-svc\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:20.186783 master-2 kubenswrapper[4776]: I1011 10:54:20.186596 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" podStartSLOduration=3.186579808 podStartE2EDuration="3.186579808s" podCreationTimestamp="2025-10-11 10:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:20.181792468 +0000 UTC m=+1694.966219187" watchObservedRunningTime="2025-10-11 10:54:20.186579808 +0000 UTC m=+1694.971006517" Oct 11 10:54:20.214969 master-2 kubenswrapper[4776]: I1011 10:54:20.213509 4776 scope.go:117] "RemoveContainer" 
containerID="10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b" Oct 11 10:54:20.251238 master-2 kubenswrapper[4776]: I1011 10:54:20.236045 4776 scope.go:117] "RemoveContainer" containerID="04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced" Oct 11 10:54:20.251238 master-2 kubenswrapper[4776]: E1011 10:54:20.237003 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced\": container with ID starting with 04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced not found: ID does not exist" containerID="04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced" Oct 11 10:54:20.251238 master-2 kubenswrapper[4776]: I1011 10:54:20.237030 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced"} err="failed to get container status \"04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced\": rpc error: code = NotFound desc = could not find container \"04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced\": container with ID starting with 04091f4e95d2b6d6c43ee51287f9ed383b416c049a15225700362c6e81d7bced not found: ID does not exist" Oct 11 10:54:20.251238 master-2 kubenswrapper[4776]: I1011 10:54:20.237057 4776 scope.go:117] "RemoveContainer" containerID="10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b" Oct 11 10:54:20.251238 master-2 kubenswrapper[4776]: E1011 10:54:20.237330 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b\": container with ID starting with 10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b not found: ID does not exist" containerID="10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b" Oct 11 10:54:20.251238 master-2 kubenswrapper[4776]: I1011 10:54:20.237368 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b"} err="failed to get container status \"10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b\": rpc error: code = NotFound desc = could not find container \"10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b\": container with ID starting with 10af27c6317efb8808c68e476f79cb52aac82a9f0fad82472d4b628cd51a5c1b not found: ID does not exist" Oct 11 10:54:20.251238 master-2 kubenswrapper[4776]: I1011 10:54:20.239048 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8c6bd7965-l58cd"] Oct 11 10:54:20.251238 master-2 kubenswrapper[4776]: I1011 10:54:20.245882 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8c6bd7965-l58cd"] Oct 11 10:54:20.961016 master-2 kubenswrapper[4776]: I1011 10:54:20.960824 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8dnfj" Oct 11 10:54:21.084313 master-2 kubenswrapper[4776]: I1011 10:54:21.084229 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-config-data\") pod \"e873bed5-1a50-4fb0-81b1-2225f4893b28\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") " Oct 11 10:54:21.087088 master-2 kubenswrapper[4776]: I1011 10:54:21.087049 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-combined-ca-bundle\") pod \"e873bed5-1a50-4fb0-81b1-2225f4893b28\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") " Oct 11 10:54:21.087451 master-2 kubenswrapper[4776]: I1011 10:54:21.087422 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nqtk\" (UniqueName: \"kubernetes.io/projected/e873bed5-1a50-4fb0-81b1-2225f4893b28-kube-api-access-4nqtk\") pod \"e873bed5-1a50-4fb0-81b1-2225f4893b28\" (UID: \"e873bed5-1a50-4fb0-81b1-2225f4893b28\") " Oct 11 10:54:21.090025 master-2 kubenswrapper[4776]: I1011 10:54:21.089981 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e873bed5-1a50-4fb0-81b1-2225f4893b28-kube-api-access-4nqtk" (OuterVolumeSpecName: "kube-api-access-4nqtk") pod "e873bed5-1a50-4fb0-81b1-2225f4893b28" (UID: "e873bed5-1a50-4fb0-81b1-2225f4893b28"). InnerVolumeSpecName "kube-api-access-4nqtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:21.108732 master-2 kubenswrapper[4776]: I1011 10:54:21.108503 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e873bed5-1a50-4fb0-81b1-2225f4893b28" (UID: "e873bed5-1a50-4fb0-81b1-2225f4893b28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:21.129705 master-2 kubenswrapper[4776]: I1011 10:54:21.129636 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-config-data" (OuterVolumeSpecName: "config-data") pod "e873bed5-1a50-4fb0-81b1-2225f4893b28" (UID: "e873bed5-1a50-4fb0-81b1-2225f4893b28"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:21.156309 master-2 kubenswrapper[4776]: I1011 10:54:21.156260 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8dnfj" Oct 11 10:54:21.157017 master-2 kubenswrapper[4776]: I1011 10:54:21.156974 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8dnfj" event={"ID":"e873bed5-1a50-4fb0-81b1-2225f4893b28","Type":"ContainerDied","Data":"0b06641be085fcf01d7e46fe302bfe6e5b8dfbdf3e87a305d5995d2fa7ab62cb"} Oct 11 10:54:21.157017 master-2 kubenswrapper[4776]: I1011 10:54:21.157014 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b06641be085fcf01d7e46fe302bfe6e5b8dfbdf3e87a305d5995d2fa7ab62cb" Oct 11 10:54:21.192878 master-2 kubenswrapper[4776]: I1011 10:54:21.192826 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:21.192878 master-2 kubenswrapper[4776]: I1011 10:54:21.192881 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nqtk\" (UniqueName: \"kubernetes.io/projected/e873bed5-1a50-4fb0-81b1-2225f4893b28-kube-api-access-4nqtk\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:21.193224 master-2 kubenswrapper[4776]: I1011 10:54:21.192895 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e873bed5-1a50-4fb0-81b1-2225f4893b28-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:21.855042 master-2 kubenswrapper[4776]: I1011 10:54:21.854907 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bb5fd84f-98bcs"] Oct 11 10:54:21.922814 master-2 kubenswrapper[4776]: I1011 10:54:21.922730 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jdggk"] Oct 11 10:54:21.923051 master-2 kubenswrapper[4776]: E1011 10:54:21.923024 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e873bed5-1a50-4fb0-81b1-2225f4893b28" containerName="keystone-db-sync" Oct 11 10:54:21.923051 master-2 kubenswrapper[4776]: I1011 10:54:21.923040 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="e873bed5-1a50-4fb0-81b1-2225f4893b28" containerName="keystone-db-sync" Oct 11 10:54:21.923153 master-2 kubenswrapper[4776]: E1011 10:54:21.923055 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" containerName="dnsmasq-dns" Oct 11 10:54:21.923153 master-2 kubenswrapper[4776]: I1011 10:54:21.923062 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" containerName="dnsmasq-dns" Oct 11 10:54:21.923153 master-2 kubenswrapper[4776]: E1011 10:54:21.923074 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" containerName="init" Oct 11 10:54:21.923153 master-2 kubenswrapper[4776]: I1011 10:54:21.923079 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" containerName="init" Oct 11 10:54:21.923323 master-2 kubenswrapper[4776]: I1011 10:54:21.923254 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" containerName="dnsmasq-dns" Oct 11 10:54:21.923323 master-2 kubenswrapper[4776]: I1011 10:54:21.923266 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="e873bed5-1a50-4fb0-81b1-2225f4893b28" containerName="keystone-db-sync" Oct 11 10:54:21.923876 master-2 kubenswrapper[4776]: I1011 
10:54:21.923848 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:21.927147 master-2 kubenswrapper[4776]: I1011 10:54:21.927118 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 11 10:54:21.927317 master-2 kubenswrapper[4776]: I1011 10:54:21.927299 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 11 10:54:21.927954 master-2 kubenswrapper[4776]: I1011 10:54:21.927927 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 11 10:54:21.991142 master-2 kubenswrapper[4776]: I1011 10:54:21.991086 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59bd749d9f-z7w7d"] Oct 11 10:54:21.992590 master-2 kubenswrapper[4776]: I1011 10:54:21.992560 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.006634 master-2 kubenswrapper[4776]: I1011 10:54:22.006580 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jdggk"] Oct 11 10:54:22.007227 master-2 kubenswrapper[4776]: I1011 10:54:22.007190 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-config-data\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.007294 master-2 kubenswrapper[4776]: I1011 10:54:22.007240 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-nb\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.007294 master-2 kubenswrapper[4776]: I1011 10:54:22.007263 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-combined-ca-bundle\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.007381 master-2 kubenswrapper[4776]: I1011 10:54:22.007296 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-scripts\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.007470 master-2 kubenswrapper[4776]: I1011 10:54:22.007427 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhkv4\" (UniqueName: \"kubernetes.io/projected/316381d6-4304-44b3-a742-50e80da7acd1-kube-api-access-xhkv4\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.007521 master-2 kubenswrapper[4776]: I1011 10:54:22.007477 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-swift-storage-0\") pod 
\"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.007521 master-2 kubenswrapper[4776]: I1011 10:54:22.007513 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-svc\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.007597 master-2 kubenswrapper[4776]: I1011 10:54:22.007569 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-credential-keys\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.007660 master-2 kubenswrapper[4776]: I1011 10:54:22.007638 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shtfx\" (UniqueName: \"kubernetes.io/projected/e8ac662a-494b-422a-84fd-2e40681d4ae6-kube-api-access-shtfx\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.007770 master-2 kubenswrapper[4776]: I1011 10:54:22.007748 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-sb\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.007826 master-2 kubenswrapper[4776]: I1011 10:54:22.007804 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-fernet-keys\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.007921 master-2 kubenswrapper[4776]: I1011 10:54:22.007903 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-config\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.017035 master-2 kubenswrapper[4776]: I1011 10:54:22.016977 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59bd749d9f-z7w7d"] Oct 11 10:54:22.067978 master-2 kubenswrapper[4776]: I1011 10:54:22.067904 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb" path="/var/lib/kubelet/pods/8ebe7164-eaa7-44d9-ad5f-b9e98dd364fb/volumes" Oct 11 10:54:22.109610 master-2 kubenswrapper[4776]: I1011 10:54:22.109484 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-config\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.109610 master-2 kubenswrapper[4776]: I1011 10:54:22.109552 4776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-config-data\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.109610 master-2 kubenswrapper[4776]: I1011 10:54:22.109581 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-nb\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.109610 master-2 kubenswrapper[4776]: I1011 10:54:22.109603 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-combined-ca-bundle\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.110189 master-2 kubenswrapper[4776]: I1011 10:54:22.109625 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-scripts\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.110189 master-2 kubenswrapper[4776]: I1011 10:54:22.109655 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-swift-storage-0\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.110189 master-2 kubenswrapper[4776]: I1011 10:54:22.109686 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhkv4\" (UniqueName: \"kubernetes.io/projected/316381d6-4304-44b3-a742-50e80da7acd1-kube-api-access-xhkv4\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.110189 master-2 kubenswrapper[4776]: I1011 10:54:22.109701 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-svc\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.110189 master-2 kubenswrapper[4776]: I1011 10:54:22.109722 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-credential-keys\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.110189 master-2 kubenswrapper[4776]: I1011 10:54:22.109747 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shtfx\" (UniqueName: \"kubernetes.io/projected/e8ac662a-494b-422a-84fd-2e40681d4ae6-kube-api-access-shtfx\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.110189 master-2 kubenswrapper[4776]: I1011 10:54:22.109775 4776 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-sb\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.110189 master-2 kubenswrapper[4776]: I1011 10:54:22.109798 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-fernet-keys\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.110472 master-2 kubenswrapper[4776]: I1011 10:54:22.110379 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-config\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.111036 master-2 kubenswrapper[4776]: I1011 10:54:22.110973 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-svc\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.111767 master-2 kubenswrapper[4776]: I1011 10:54:22.111740 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-nb\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.112220 master-2 kubenswrapper[4776]: I1011 10:54:22.112143 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-swift-storage-0\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.113220 master-2 kubenswrapper[4776]: I1011 10:54:22.113182 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-scripts\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.113347 master-2 kubenswrapper[4776]: I1011 10:54:22.113309 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-sb\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.113517 master-2 kubenswrapper[4776]: I1011 10:54:22.113486 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-fernet-keys\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.113648 master-2 kubenswrapper[4776]: I1011 10:54:22.113614 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-credential-keys\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.114068 master-2 kubenswrapper[4776]: I1011 10:54:22.114017 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-config-data\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.114721 master-2 kubenswrapper[4776]: I1011 10:54:22.114690 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-combined-ca-bundle\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.169847 master-2 kubenswrapper[4776]: I1011 10:54:22.169753 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" podUID="7da38dfb-a995-4843-a05a-351e5dc557ae" containerName="dnsmasq-dns" containerID="cri-o://bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7" gracePeriod=10 Oct 11 10:54:22.236532 master-2 kubenswrapper[4776]: I1011 10:54:22.236389 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhkv4\" (UniqueName: \"kubernetes.io/projected/316381d6-4304-44b3-a742-50e80da7acd1-kube-api-access-xhkv4\") pod \"keystone-bootstrap-jdggk\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.240594 master-2 kubenswrapper[4776]: I1011 10:54:22.237273 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:22.240594 master-2 kubenswrapper[4776]: I1011 10:54:22.237978 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shtfx\" (UniqueName: \"kubernetes.io/projected/e8ac662a-494b-422a-84fd-2e40681d4ae6-kube-api-access-shtfx\") pod \"dnsmasq-dns-59bd749d9f-z7w7d\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.316921 master-2 kubenswrapper[4776]: I1011 10:54:22.316227 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:22.559699 master-2 kubenswrapper[4776]: I1011 10:54:22.557372 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59bd749d9f-z7w7d"] Oct 11 10:54:22.588714 master-2 kubenswrapper[4776]: I1011 10:54:22.580872 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-5pz76"] Oct 11 10:54:22.588714 master-2 kubenswrapper[4776]: I1011 10:54:22.582135 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.588714 master-2 kubenswrapper[4776]: I1011 10:54:22.585649 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 11 10:54:22.588714 master-2 kubenswrapper[4776]: I1011 10:54:22.585785 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 11 10:54:22.602807 master-2 kubenswrapper[4776]: I1011 10:54:22.602761 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-5pz76"] Oct 11 10:54:22.634203 master-2 kubenswrapper[4776]: I1011 10:54:22.634079 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-scripts\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.634452 master-2 kubenswrapper[4776]: I1011 10:54:22.634433 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-combined-ca-bundle\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.634569 master-2 kubenswrapper[4776]: I1011 10:54:22.634553 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-config-data\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.634736 master-2 kubenswrapper[4776]: I1011 10:54:22.634718 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cef6f34-fa11-4593-b4d8-c9ac415f1967-logs\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.634856 master-2 kubenswrapper[4776]: I1011 10:54:22.634839 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkk8v\" (UniqueName: \"kubernetes.io/projected/7cef6f34-fa11-4593-b4d8-c9ac415f1967-kube-api-access-wkk8v\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.737054 master-2 kubenswrapper[4776]: I1011 10:54:22.736983 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cef6f34-fa11-4593-b4d8-c9ac415f1967-logs\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.737054 master-2 kubenswrapper[4776]: I1011 10:54:22.737033 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkk8v\" (UniqueName: \"kubernetes.io/projected/7cef6f34-fa11-4593-b4d8-c9ac415f1967-kube-api-access-wkk8v\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.740323 master-2 kubenswrapper[4776]: I1011 10:54:22.737147 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-scripts\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.740323 master-2 kubenswrapper[4776]: I1011 10:54:22.737172 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-combined-ca-bundle\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.740323 master-2 kubenswrapper[4776]: I1011 10:54:22.737197 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-config-data\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.740323 master-2 kubenswrapper[4776]: I1011 10:54:22.738601 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cef6f34-fa11-4593-b4d8-c9ac415f1967-logs\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.741357 master-2 kubenswrapper[4776]: I1011 10:54:22.741308 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-scripts\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.741988 master-2 kubenswrapper[4776]: I1011 10:54:22.741584 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-config-data\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.745359 master-2 kubenswrapper[4776]: I1011 10:54:22.745337 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-combined-ca-bundle\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.766136 master-2 kubenswrapper[4776]: I1011 10:54:22.766102 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkk8v\" (UniqueName: \"kubernetes.io/projected/7cef6f34-fa11-4593-b4d8-c9ac415f1967-kube-api-access-wkk8v\") pod \"placement-db-sync-5pz76\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.907248 master-2 kubenswrapper[4776]: I1011 10:54:22.907196 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:22.934473 master-2 kubenswrapper[4776]: I1011 10:54:22.934431 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jdggk"] Oct 11 10:54:23.101867 master-2 kubenswrapper[4776]: I1011 10:54:23.101572 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:23.145258 master-2 kubenswrapper[4776]: I1011 10:54:23.142441 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-svc\") pod \"7da38dfb-a995-4843-a05a-351e5dc557ae\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " Oct 11 10:54:23.145258 master-2 kubenswrapper[4776]: I1011 10:54:23.142514 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-swift-storage-0\") pod \"7da38dfb-a995-4843-a05a-351e5dc557ae\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " Oct 11 10:54:23.145258 master-2 kubenswrapper[4776]: I1011 10:54:23.142543 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-nb\") pod \"7da38dfb-a995-4843-a05a-351e5dc557ae\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " Oct 11 10:54:23.145258 master-2 kubenswrapper[4776]: I1011 10:54:23.142695 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8llfr\" (UniqueName: \"kubernetes.io/projected/7da38dfb-a995-4843-a05a-351e5dc557ae-kube-api-access-8llfr\") pod \"7da38dfb-a995-4843-a05a-351e5dc557ae\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " Oct 11 10:54:23.145258 master-2 kubenswrapper[4776]: I1011 10:54:23.142741 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-sb\") pod \"7da38dfb-a995-4843-a05a-351e5dc557ae\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " Oct 11 10:54:23.145258 master-2 kubenswrapper[4776]: I1011 10:54:23.142771 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-config\") pod \"7da38dfb-a995-4843-a05a-351e5dc557ae\" (UID: \"7da38dfb-a995-4843-a05a-351e5dc557ae\") " Oct 11 10:54:23.162110 master-2 kubenswrapper[4776]: I1011 10:54:23.152049 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7da38dfb-a995-4843-a05a-351e5dc557ae-kube-api-access-8llfr" (OuterVolumeSpecName: "kube-api-access-8llfr") pod "7da38dfb-a995-4843-a05a-351e5dc557ae" (UID: "7da38dfb-a995-4843-a05a-351e5dc557ae"). InnerVolumeSpecName "kube-api-access-8llfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:23.193769 master-2 kubenswrapper[4776]: I1011 10:54:23.184018 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7da38dfb-a995-4843-a05a-351e5dc557ae" (UID: "7da38dfb-a995-4843-a05a-351e5dc557ae"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:23.193769 master-2 kubenswrapper[4776]: I1011 10:54:23.188016 4776 generic.go:334] "Generic (PLEG): container finished" podID="7da38dfb-a995-4843-a05a-351e5dc557ae" containerID="bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7" exitCode=0 Oct 11 10:54:23.193769 master-2 kubenswrapper[4776]: I1011 10:54:23.188143 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" event={"ID":"7da38dfb-a995-4843-a05a-351e5dc557ae","Type":"ContainerDied","Data":"bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7"} Oct 11 10:54:23.193769 master-2 kubenswrapper[4776]: I1011 10:54:23.188180 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" event={"ID":"7da38dfb-a995-4843-a05a-351e5dc557ae","Type":"ContainerDied","Data":"585cb1a3dd3493a9c6dcca4a20afacedee65336427561c983a60785158d1da86"} Oct 11 10:54:23.193769 master-2 kubenswrapper[4776]: I1011 10:54:23.188200 4776 scope.go:117] "RemoveContainer" containerID="bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7" Oct 11 10:54:23.193769 master-2 kubenswrapper[4776]: I1011 10:54:23.188378 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb5fd84f-98bcs" Oct 11 10:54:23.197487 master-2 kubenswrapper[4776]: I1011 10:54:23.196373 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59bd749d9f-z7w7d"] Oct 11 10:54:23.199320 master-2 kubenswrapper[4776]: I1011 10:54:23.199272 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jdggk" event={"ID":"316381d6-4304-44b3-a742-50e80da7acd1","Type":"ContainerStarted","Data":"1ef6065ef3373ebd1d031e48bef6566526ba170bc1411a15757ea04bbf481260"} Oct 11 10:54:23.199379 master-2 kubenswrapper[4776]: I1011 10:54:23.199323 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jdggk" event={"ID":"316381d6-4304-44b3-a742-50e80da7acd1","Type":"ContainerStarted","Data":"475d348503e4fe6007aebb3a7d38a0a9425dc9455cf353a62f41318f4614bcf0"} Oct 11 10:54:23.213095 master-2 kubenswrapper[4776]: I1011 10:54:23.213027 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7da38dfb-a995-4843-a05a-351e5dc557ae" (UID: "7da38dfb-a995-4843-a05a-351e5dc557ae"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:23.214837 master-2 kubenswrapper[4776]: I1011 10:54:23.214796 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-config" (OuterVolumeSpecName: "config") pod "7da38dfb-a995-4843-a05a-351e5dc557ae" (UID: "7da38dfb-a995-4843-a05a-351e5dc557ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:23.223286 master-2 kubenswrapper[4776]: I1011 10:54:23.223009 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7da38dfb-a995-4843-a05a-351e5dc557ae" (UID: "7da38dfb-a995-4843-a05a-351e5dc557ae"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:23.237150 master-2 kubenswrapper[4776]: I1011 10:54:23.237061 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7da38dfb-a995-4843-a05a-351e5dc557ae" (UID: "7da38dfb-a995-4843-a05a-351e5dc557ae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:23.246076 master-2 kubenswrapper[4776]: I1011 10:54:23.246041 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8llfr\" (UniqueName: \"kubernetes.io/projected/7da38dfb-a995-4843-a05a-351e5dc557ae-kube-api-access-8llfr\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:23.246076 master-2 kubenswrapper[4776]: I1011 10:54:23.246074 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-sb\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:23.246076 master-2 kubenswrapper[4776]: I1011 10:54:23.246085 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:23.246322 master-2 kubenswrapper[4776]: I1011 10:54:23.246095 4776 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-svc\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:23.246322 master-2 kubenswrapper[4776]: I1011 10:54:23.246104 4776 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-dns-swift-storage-0\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:23.246322 master-2 kubenswrapper[4776]: I1011 10:54:23.246113 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7da38dfb-a995-4843-a05a-351e5dc557ae-ovsdbserver-nb\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:23.267882 master-2 kubenswrapper[4776]: I1011 10:54:23.267806 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jdggk" podStartSLOduration=2.267785561 podStartE2EDuration="2.267785561s" podCreationTimestamp="2025-10-11 10:54:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:23.266109975 +0000 UTC m=+1698.050536694" watchObservedRunningTime="2025-10-11 10:54:23.267785561 +0000 UTC m=+1698.052212270" Oct 11 10:54:23.352500 master-2 kubenswrapper[4776]: I1011 10:54:23.352444 4776 scope.go:117] "RemoveContainer" containerID="456a11f9f99ee2dc27577d1a40bfabc1a966dba6a7ff7a1deaefb75df56315fe" Oct 11 10:54:23.390663 master-2 kubenswrapper[4776]: I1011 10:54:23.390621 4776 scope.go:117] "RemoveContainer" containerID="bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7" Oct 11 10:54:23.390962 master-2 kubenswrapper[4776]: E1011 10:54:23.390928 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7\": container with ID starting with bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7 not found: ID does not exist" 
containerID="bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7" Oct 11 10:54:23.391035 master-2 kubenswrapper[4776]: I1011 10:54:23.390978 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7"} err="failed to get container status \"bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7\": rpc error: code = NotFound desc = could not find container \"bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7\": container with ID starting with bd7bf36076930ed876dd2f6d4763bb0a139052c23b26ad79e36097d125bb2fc7 not found: ID does not exist" Oct 11 10:54:23.391035 master-2 kubenswrapper[4776]: I1011 10:54:23.391001 4776 scope.go:117] "RemoveContainer" containerID="456a11f9f99ee2dc27577d1a40bfabc1a966dba6a7ff7a1deaefb75df56315fe" Oct 11 10:54:23.391250 master-2 kubenswrapper[4776]: E1011 10:54:23.391218 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"456a11f9f99ee2dc27577d1a40bfabc1a966dba6a7ff7a1deaefb75df56315fe\": container with ID starting with 456a11f9f99ee2dc27577d1a40bfabc1a966dba6a7ff7a1deaefb75df56315fe not found: ID does not exist" containerID="456a11f9f99ee2dc27577d1a40bfabc1a966dba6a7ff7a1deaefb75df56315fe" Oct 11 10:54:23.391250 master-2 kubenswrapper[4776]: I1011 10:54:23.391238 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"456a11f9f99ee2dc27577d1a40bfabc1a966dba6a7ff7a1deaefb75df56315fe"} err="failed to get container status \"456a11f9f99ee2dc27577d1a40bfabc1a966dba6a7ff7a1deaefb75df56315fe\": rpc error: code = NotFound desc = could not find container \"456a11f9f99ee2dc27577d1a40bfabc1a966dba6a7ff7a1deaefb75df56315fe\": container with ID starting with 456a11f9f99ee2dc27577d1a40bfabc1a966dba6a7ff7a1deaefb75df56315fe not found: ID does not exist" Oct 11 10:54:23.454050 master-2 kubenswrapper[4776]: I1011 10:54:23.451189 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-5pz76"] Oct 11 10:54:23.618436 master-2 kubenswrapper[4776]: I1011 10:54:23.617369 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bb5fd84f-98bcs"] Oct 11 10:54:23.626136 master-2 kubenswrapper[4776]: I1011 10:54:23.626091 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bb5fd84f-98bcs"] Oct 11 10:54:24.073494 master-2 kubenswrapper[4776]: I1011 10:54:24.072453 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7da38dfb-a995-4843-a05a-351e5dc557ae" path="/var/lib/kubelet/pods/7da38dfb-a995-4843-a05a-351e5dc557ae/volumes" Oct 11 10:54:24.210489 master-2 kubenswrapper[4776]: I1011 10:54:24.210452 4776 generic.go:334] "Generic (PLEG): container finished" podID="e8ac662a-494b-422a-84fd-2e40681d4ae6" containerID="f192722015b7d9504bd8c55034d7826d00d5fb1a1a0d4260c835d69125d1e33c" exitCode=0 Oct 11 10:54:24.211610 master-2 kubenswrapper[4776]: I1011 10:54:24.210553 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" event={"ID":"e8ac662a-494b-422a-84fd-2e40681d4ae6","Type":"ContainerDied","Data":"f192722015b7d9504bd8c55034d7826d00d5fb1a1a0d4260c835d69125d1e33c"} Oct 11 10:54:24.211660 master-2 kubenswrapper[4776]: I1011 10:54:24.211627 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" 
event={"ID":"e8ac662a-494b-422a-84fd-2e40681d4ae6","Type":"ContainerStarted","Data":"9932a4fcd14aa933b55724fba2f6169f3c81fe174fbb3b8a282c132a474c40e2"} Oct 11 10:54:24.217574 master-2 kubenswrapper[4776]: I1011 10:54:24.216996 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5pz76" event={"ID":"7cef6f34-fa11-4593-b4d8-c9ac415f1967","Type":"ContainerStarted","Data":"e1f01b2335980f0a5eb4c3856d30b5245099ce20b959f182008d4ec5131f98bf"} Oct 11 10:54:24.327989 master-2 kubenswrapper[4776]: I1011 10:54:24.321992 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:54:24.330819 master-2 kubenswrapper[4776]: E1011 10:54:24.330788 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7da38dfb-a995-4843-a05a-351e5dc557ae" containerName="dnsmasq-dns" Oct 11 10:54:24.330898 master-2 kubenswrapper[4776]: I1011 10:54:24.330837 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7da38dfb-a995-4843-a05a-351e5dc557ae" containerName="dnsmasq-dns" Oct 11 10:54:24.330898 master-2 kubenswrapper[4776]: E1011 10:54:24.330855 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7da38dfb-a995-4843-a05a-351e5dc557ae" containerName="init" Oct 11 10:54:24.330898 master-2 kubenswrapper[4776]: I1011 10:54:24.330861 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7da38dfb-a995-4843-a05a-351e5dc557ae" containerName="init" Oct 11 10:54:24.331882 master-2 kubenswrapper[4776]: I1011 10:54:24.331851 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7da38dfb-a995-4843-a05a-351e5dc557ae" containerName="dnsmasq-dns" Oct 11 10:54:24.333344 master-2 kubenswrapper[4776]: I1011 10:54:24.332832 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.336197 master-2 kubenswrapper[4776]: I1011 10:54:24.335795 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 11 10:54:24.336197 master-2 kubenswrapper[4776]: I1011 10:54:24.335979 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-b5802-default-internal-config-data" Oct 11 10:54:24.336197 master-2 kubenswrapper[4776]: I1011 10:54:24.336109 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 11 10:54:24.337564 master-2 kubenswrapper[4776]: I1011 10:54:24.337545 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:54:24.466839 master-2 kubenswrapper[4776]: I1011 10:54:24.466768 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-config-data\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.466839 master-2 kubenswrapper[4776]: I1011 10:54:24.466835 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-httpd-run\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.467208 master-2 kubenswrapper[4776]: I1011 10:54:24.466864 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.467208 master-2 kubenswrapper[4776]: I1011 10:54:24.466913 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.467208 master-2 kubenswrapper[4776]: I1011 10:54:24.466984 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-internal-tls-certs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.467208 master-2 kubenswrapper[4776]: I1011 10:54:24.467031 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-scripts\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.467208 master-2 kubenswrapper[4776]: I1011 10:54:24.467149 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-logs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.467208 master-2 kubenswrapper[4776]: I1011 10:54:24.467190 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbzvc\" (UniqueName: \"kubernetes.io/projected/d0cc3965-fd34-4442-b408-a5ae441443e4-kube-api-access-jbzvc\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.569528 master-2 kubenswrapper[4776]: I1011 10:54:24.569486 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbzvc\" (UniqueName: \"kubernetes.io/projected/d0cc3965-fd34-4442-b408-a5ae441443e4-kube-api-access-jbzvc\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.570001 master-2 kubenswrapper[4776]: I1011 10:54:24.569978 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-config-data\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.570140 master-2 kubenswrapper[4776]: I1011 10:54:24.570124 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-httpd-run\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.570281 master-2 kubenswrapper[4776]: I1011 10:54:24.570265 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.570447 master-2 kubenswrapper[4776]: I1011 10:54:24.570428 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.570562 master-2 kubenswrapper[4776]: I1011 10:54:24.570546 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-internal-tls-certs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.570696 master-2 kubenswrapper[4776]: I1011 10:54:24.570662 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-scripts\") pod 
\"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.570886 master-2 kubenswrapper[4776]: I1011 10:54:24.570839 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-logs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.571056 master-2 kubenswrapper[4776]: I1011 10:54:24.570840 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-httpd-run\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.572067 master-2 kubenswrapper[4776]: I1011 10:54:24.572036 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-logs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.602504 master-2 kubenswrapper[4776]: I1011 10:54:24.602440 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-scripts\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.602802 master-2 kubenswrapper[4776]: I1011 10:54:24.602546 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.602921 master-2 kubenswrapper[4776]: I1011 10:54:24.602833 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-internal-tls-certs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.608724 master-2 kubenswrapper[4776]: I1011 10:54:24.604071 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-config-data\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.608724 master-2 kubenswrapper[4776]: I1011 10:54:24.607211 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbzvc\" (UniqueName: \"kubernetes.io/projected/d0cc3965-fd34-4442-b408-a5ae441443e4-kube-api-access-jbzvc\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:24.610073 master-2 kubenswrapper[4776]: I1011 10:54:24.610055 4776 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Oct 11 10:54:24.610232 master-2 kubenswrapper[4776]: I1011 10:54:24.610211 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/c5f302d2b867cc737f2daf9c42090b10daaee38f14f31a51f3dbff0cf77a4fd1/globalmount\"" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:25.142963 master-2 kubenswrapper[4776]: I1011 10:54:25.140339 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:25.229090 master-2 kubenswrapper[4776]: I1011 10:54:25.229036 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" event={"ID":"e8ac662a-494b-422a-84fd-2e40681d4ae6","Type":"ContainerDied","Data":"9932a4fcd14aa933b55724fba2f6169f3c81fe174fbb3b8a282c132a474c40e2"} Oct 11 10:54:25.229490 master-2 kubenswrapper[4776]: I1011 10:54:25.229101 4776 scope.go:117] "RemoveContainer" containerID="f192722015b7d9504bd8c55034d7826d00d5fb1a1a0d4260c835d69125d1e33c" Oct 11 10:54:25.229490 master-2 kubenswrapper[4776]: I1011 10:54:25.229147 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59bd749d9f-z7w7d" Oct 11 10:54:25.293428 master-2 kubenswrapper[4776]: I1011 10:54:25.293377 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-swift-storage-0\") pod \"e8ac662a-494b-422a-84fd-2e40681d4ae6\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " Oct 11 10:54:25.293639 master-2 kubenswrapper[4776]: I1011 10:54:25.293466 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-sb\") pod \"e8ac662a-494b-422a-84fd-2e40681d4ae6\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " Oct 11 10:54:25.293639 master-2 kubenswrapper[4776]: I1011 10:54:25.293498 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-nb\") pod \"e8ac662a-494b-422a-84fd-2e40681d4ae6\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " Oct 11 10:54:25.293639 master-2 kubenswrapper[4776]: I1011 10:54:25.293597 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shtfx\" (UniqueName: \"kubernetes.io/projected/e8ac662a-494b-422a-84fd-2e40681d4ae6-kube-api-access-shtfx\") pod \"e8ac662a-494b-422a-84fd-2e40681d4ae6\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " Oct 11 10:54:25.293791 master-2 kubenswrapper[4776]: I1011 10:54:25.293724 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-config\") pod \"e8ac662a-494b-422a-84fd-2e40681d4ae6\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " Oct 11 10:54:25.293791 master-2 kubenswrapper[4776]: I1011 10:54:25.293779 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-svc\") pod \"e8ac662a-494b-422a-84fd-2e40681d4ae6\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " Oct 11 10:54:25.297430 master-2 kubenswrapper[4776]: I1011 10:54:25.297382 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8ac662a-494b-422a-84fd-2e40681d4ae6-kube-api-access-shtfx" (OuterVolumeSpecName: "kube-api-access-shtfx") pod "e8ac662a-494b-422a-84fd-2e40681d4ae6" (UID: "e8ac662a-494b-422a-84fd-2e40681d4ae6"). InnerVolumeSpecName "kube-api-access-shtfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:25.313944 master-2 kubenswrapper[4776]: I1011 10:54:25.313849 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e8ac662a-494b-422a-84fd-2e40681d4ae6" (UID: "e8ac662a-494b-422a-84fd-2e40681d4ae6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:25.316440 master-2 kubenswrapper[4776]: I1011 10:54:25.315376 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e8ac662a-494b-422a-84fd-2e40681d4ae6" (UID: "e8ac662a-494b-422a-84fd-2e40681d4ae6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:25.319955 master-2 kubenswrapper[4776]: I1011 10:54:25.319883 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-config" (OuterVolumeSpecName: "config") pod "e8ac662a-494b-422a-84fd-2e40681d4ae6" (UID: "e8ac662a-494b-422a-84fd-2e40681d4ae6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:25.327033 master-2 kubenswrapper[4776]: E1011 10:54:25.326931 4776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-sb podName:e8ac662a-494b-422a-84fd-2e40681d4ae6 nodeName:}" failed. No retries permitted until 2025-10-11 10:54:25.826897741 +0000 UTC m=+1700.611324450 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ovsdbserver-sb" (UniqueName: "kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-sb") pod "e8ac662a-494b-422a-84fd-2e40681d4ae6" (UID: "e8ac662a-494b-422a-84fd-2e40681d4ae6") : error deleting /var/lib/kubelet/pods/e8ac662a-494b-422a-84fd-2e40681d4ae6/volume-subpaths: remove /var/lib/kubelet/pods/e8ac662a-494b-422a-84fd-2e40681d4ae6/volume-subpaths: no such file or directory Oct 11 10:54:25.327412 master-2 kubenswrapper[4776]: I1011 10:54:25.327373 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e8ac662a-494b-422a-84fd-2e40681d4ae6" (UID: "e8ac662a-494b-422a-84fd-2e40681d4ae6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:25.396476 master-2 kubenswrapper[4776]: I1011 10:54:25.396423 4776 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-swift-storage-0\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:25.396476 master-2 kubenswrapper[4776]: I1011 10:54:25.396463 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-nb\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:25.396476 master-2 kubenswrapper[4776]: I1011 10:54:25.396476 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shtfx\" (UniqueName: \"kubernetes.io/projected/e8ac662a-494b-422a-84fd-2e40681d4ae6-kube-api-access-shtfx\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:25.396476 master-2 kubenswrapper[4776]: I1011 10:54:25.396488 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:25.396914 master-2 kubenswrapper[4776]: I1011 10:54:25.396502 4776 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-dns-svc\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:25.919283 master-2 kubenswrapper[4776]: I1011 10:54:25.919191 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-sb\") pod \"e8ac662a-494b-422a-84fd-2e40681d4ae6\" (UID: \"e8ac662a-494b-422a-84fd-2e40681d4ae6\") " Oct 11 10:54:25.919876 master-2 kubenswrapper[4776]: I1011 10:54:25.919687 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e8ac662a-494b-422a-84fd-2e40681d4ae6" (UID: "e8ac662a-494b-422a-84fd-2e40681d4ae6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:54:25.920068 master-2 kubenswrapper[4776]: I1011 10:54:25.920007 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8ac662a-494b-422a-84fd-2e40681d4ae6-ovsdbserver-sb\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:26.178843 master-2 kubenswrapper[4776]: I1011 10:54:26.178708 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:26.273013 master-2 kubenswrapper[4776]: I1011 10:54:26.272957 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59bd749d9f-z7w7d"] Oct 11 10:54:26.277521 master-2 kubenswrapper[4776]: I1011 10:54:26.277470 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59bd749d9f-z7w7d"] Oct 11 10:54:26.456775 master-2 kubenswrapper[4776]: I1011 10:54:26.453321 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:27.279772 master-2 kubenswrapper[4776]: I1011 10:54:27.279720 4776 generic.go:334] "Generic (PLEG): container finished" podID="316381d6-4304-44b3-a742-50e80da7acd1" containerID="1ef6065ef3373ebd1d031e48bef6566526ba170bc1411a15757ea04bbf481260" exitCode=0 Oct 11 10:54:27.281105 master-2 kubenswrapper[4776]: I1011 10:54:27.279776 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jdggk" event={"ID":"316381d6-4304-44b3-a742-50e80da7acd1","Type":"ContainerDied","Data":"1ef6065ef3373ebd1d031e48bef6566526ba170bc1411a15757ea04bbf481260"} Oct 11 10:54:27.796610 master-2 kubenswrapper[4776]: I1011 10:54:27.796456 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b5802-default-external-api-2"] Oct 11 10:54:27.796891 master-2 kubenswrapper[4776]: E1011 10:54:27.796867 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ac662a-494b-422a-84fd-2e40681d4ae6" containerName="init" Oct 11 10:54:27.796891 master-2 kubenswrapper[4776]: I1011 10:54:27.796885 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ac662a-494b-422a-84fd-2e40681d4ae6" containerName="init" Oct 11 10:54:27.797113 master-2 kubenswrapper[4776]: I1011 10:54:27.797065 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8ac662a-494b-422a-84fd-2e40681d4ae6" containerName="init" Oct 11 10:54:27.798124 master-2 kubenswrapper[4776]: I1011 10:54:27.798090 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:27.801641 master-2 kubenswrapper[4776]: I1011 10:54:27.801597 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-b5802-default-external-config-data" Oct 11 10:54:27.801977 master-2 kubenswrapper[4776]: I1011 10:54:27.801938 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 11 10:54:27.813414 master-2 kubenswrapper[4776]: I1011 10:54:27.813363 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-external-api-2"] Oct 11 10:54:27.970948 master-2 kubenswrapper[4776]: I1011 10:54:27.970899 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-public-tls-certs\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:27.971156 master-2 kubenswrapper[4776]: I1011 10:54:27.971114 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-config-data\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:27.971215 master-2 kubenswrapper[4776]: I1011 10:54:27.971172 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f984s\" (UniqueName: \"kubernetes.io/projected/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-kube-api-access-f984s\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:27.971262 
master-2 kubenswrapper[4776]: I1011 10:54:27.971239 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-logs\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:27.971320 master-2 kubenswrapper[4776]: I1011 10:54:27.971290 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-httpd-run\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:27.971433 master-2 kubenswrapper[4776]: I1011 10:54:27.971402 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-96ecbc97-5be5-45e4-8942-00605756b89a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:27.971490 master-2 kubenswrapper[4776]: I1011 10:54:27.971446 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-scripts\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:27.971540 master-2 kubenswrapper[4776]: I1011 10:54:27.971494 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-combined-ca-bundle\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.073968 master-2 kubenswrapper[4776]: I1011 10:54:28.073846 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-public-tls-certs\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.073968 master-2 kubenswrapper[4776]: I1011 10:54:28.073961 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-config-data\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.074228 master-2 kubenswrapper[4776]: I1011 10:54:28.073997 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f984s\" (UniqueName: \"kubernetes.io/projected/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-kube-api-access-f984s\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.074228 master-2 kubenswrapper[4776]: I1011 10:54:28.074043 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-logs\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.074228 master-2 kubenswrapper[4776]: I1011 10:54:28.074074 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-httpd-run\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.074228 master-2 kubenswrapper[4776]: I1011 10:54:28.074117 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-96ecbc97-5be5-45e4-8942-00605756b89a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.074228 master-2 kubenswrapper[4776]: I1011 10:54:28.074146 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-scripts\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.074228 master-2 kubenswrapper[4776]: I1011 10:54:28.074174 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-combined-ca-bundle\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.075341 master-2 kubenswrapper[4776]: I1011 10:54:28.075307 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-httpd-run\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.076767 master-2 kubenswrapper[4776]: I1011 10:54:28.076694 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-logs\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.078317 master-2 kubenswrapper[4776]: I1011 10:54:28.078039 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-config-data\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.078317 master-2 kubenswrapper[4776]: I1011 10:54:28.078131 4776 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 11 10:54:28.078317 master-2 kubenswrapper[4776]: I1011 10:54:28.078159 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-96ecbc97-5be5-45e4-8942-00605756b89a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/977628254c2695ff17425dccc1fbe376fb7c4f4d8dfcfd87eb3a48ca9779afa1/globalmount\"" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.078567 master-2 kubenswrapper[4776]: I1011 10:54:28.078432 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-combined-ca-bundle\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.079029 master-2 kubenswrapper[4776]: I1011 10:54:28.078916 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-scripts\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.080331 master-2 kubenswrapper[4776]: I1011 10:54:28.080283 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-public-tls-certs\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.081254 master-2 kubenswrapper[4776]: I1011 10:54:28.081212 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8ac662a-494b-422a-84fd-2e40681d4ae6" path="/var/lib/kubelet/pods/e8ac662a-494b-422a-84fd-2e40681d4ae6/volumes" Oct 11 10:54:28.105713 master-2 kubenswrapper[4776]: I1011 10:54:28.105647 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f984s\" (UniqueName: \"kubernetes.io/projected/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-kube-api-access-f984s\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:28.341660 master-2 kubenswrapper[4776]: I1011 10:54:28.341593 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5pz76" event={"ID":"7cef6f34-fa11-4593-b4d8-c9ac415f1967","Type":"ContainerStarted","Data":"5a93f178b03e320516bcd32c99e245334eeef09b36cb1fbff0b5d12e1d56145d"} Oct 11 10:54:28.397879 master-2 kubenswrapper[4776]: I1011 10:54:28.397598 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:54:28.421844 master-2 kubenswrapper[4776]: I1011 10:54:28.421768 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-5pz76" podStartSLOduration=2.020701722 podStartE2EDuration="6.421743903s" podCreationTimestamp="2025-10-11 10:54:22 +0000 UTC" firstStartedPulling="2025-10-11 10:54:23.462597707 +0000 UTC m=+1698.247024416" lastFinishedPulling="2025-10-11 10:54:27.863639878 +0000 UTC m=+1702.648066597" observedRunningTime="2025-10-11 10:54:28.405848212 +0000 UTC m=+1703.190274941" 
watchObservedRunningTime="2025-10-11 10:54:28.421743903 +0000 UTC m=+1703.206170612" Oct 11 10:54:29.195839 master-2 kubenswrapper[4776]: I1011 10:54:29.194909 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:29.302070 master-2 kubenswrapper[4776]: I1011 10:54:29.302021 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-combined-ca-bundle\") pod \"316381d6-4304-44b3-a742-50e80da7acd1\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " Oct 11 10:54:29.302215 master-2 kubenswrapper[4776]: I1011 10:54:29.302088 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhkv4\" (UniqueName: \"kubernetes.io/projected/316381d6-4304-44b3-a742-50e80da7acd1-kube-api-access-xhkv4\") pod \"316381d6-4304-44b3-a742-50e80da7acd1\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " Oct 11 10:54:29.302215 master-2 kubenswrapper[4776]: I1011 10:54:29.302109 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-credential-keys\") pod \"316381d6-4304-44b3-a742-50e80da7acd1\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " Oct 11 10:54:29.302215 master-2 kubenswrapper[4776]: I1011 10:54:29.302214 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-scripts\") pod \"316381d6-4304-44b3-a742-50e80da7acd1\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " Oct 11 10:54:29.302338 master-2 kubenswrapper[4776]: I1011 10:54:29.302236 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-fernet-keys\") pod \"316381d6-4304-44b3-a742-50e80da7acd1\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " Oct 11 10:54:29.302338 master-2 kubenswrapper[4776]: I1011 10:54:29.302311 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-config-data\") pod \"316381d6-4304-44b3-a742-50e80da7acd1\" (UID: \"316381d6-4304-44b3-a742-50e80da7acd1\") " Oct 11 10:54:29.306050 master-2 kubenswrapper[4776]: I1011 10:54:29.305970 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-scripts" (OuterVolumeSpecName: "scripts") pod "316381d6-4304-44b3-a742-50e80da7acd1" (UID: "316381d6-4304-44b3-a742-50e80da7acd1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:29.306662 master-2 kubenswrapper[4776]: I1011 10:54:29.306581 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/316381d6-4304-44b3-a742-50e80da7acd1-kube-api-access-xhkv4" (OuterVolumeSpecName: "kube-api-access-xhkv4") pod "316381d6-4304-44b3-a742-50e80da7acd1" (UID: "316381d6-4304-44b3-a742-50e80da7acd1"). InnerVolumeSpecName "kube-api-access-xhkv4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:29.307081 master-2 kubenswrapper[4776]: I1011 10:54:29.307025 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "316381d6-4304-44b3-a742-50e80da7acd1" (UID: "316381d6-4304-44b3-a742-50e80da7acd1"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:29.309264 master-2 kubenswrapper[4776]: I1011 10:54:29.309079 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "316381d6-4304-44b3-a742-50e80da7acd1" (UID: "316381d6-4304-44b3-a742-50e80da7acd1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:29.335895 master-2 kubenswrapper[4776]: I1011 10:54:29.335847 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "316381d6-4304-44b3-a742-50e80da7acd1" (UID: "316381d6-4304-44b3-a742-50e80da7acd1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:29.368010 master-2 kubenswrapper[4776]: I1011 10:54:29.367949 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-config-data" (OuterVolumeSpecName: "config-data") pod "316381d6-4304-44b3-a742-50e80da7acd1" (UID: "316381d6-4304-44b3-a742-50e80da7acd1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:29.370902 master-2 kubenswrapper[4776]: I1011 10:54:29.370856 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jdggk" event={"ID":"316381d6-4304-44b3-a742-50e80da7acd1","Type":"ContainerDied","Data":"475d348503e4fe6007aebb3a7d38a0a9425dc9455cf353a62f41318f4614bcf0"} Oct 11 10:54:29.370993 master-2 kubenswrapper[4776]: I1011 10:54:29.370893 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jdggk" Oct 11 10:54:29.370993 master-2 kubenswrapper[4776]: I1011 10:54:29.370914 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="475d348503e4fe6007aebb3a7d38a0a9425dc9455cf353a62f41318f4614bcf0" Oct 11 10:54:29.376853 master-2 kubenswrapper[4776]: I1011 10:54:29.376799 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"d0cc3965-fd34-4442-b408-a5ae441443e4","Type":"ContainerStarted","Data":"c0a0f68535f1045164a03ee0e1499295237f65e65bc92ffb7cde06bc73007d4d"} Oct 11 10:54:29.376853 master-2 kubenswrapper[4776]: I1011 10:54:29.376855 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"d0cc3965-fd34-4442-b408-a5ae441443e4","Type":"ContainerStarted","Data":"020cb51e8f192e46e701d1c522ecf5cc9d035525d4d7b945c86775cc56da8867"} Oct 11 10:54:29.404805 master-2 kubenswrapper[4776]: I1011 10:54:29.404753 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:29.404805 master-2 kubenswrapper[4776]: I1011 10:54:29.404788 4776 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-credential-keys\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:29.404805 master-2 kubenswrapper[4776]: I1011 10:54:29.404799 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhkv4\" (UniqueName: \"kubernetes.io/projected/316381d6-4304-44b3-a742-50e80da7acd1-kube-api-access-xhkv4\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:29.404805 master-2 kubenswrapper[4776]: I1011 10:54:29.404808 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:29.404805 master-2 kubenswrapper[4776]: I1011 10:54:29.404817 4776 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-fernet-keys\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:29.405197 master-2 kubenswrapper[4776]: I1011 10:54:29.404827 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316381d6-4304-44b3-a742-50e80da7acd1-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:29.451220 master-2 kubenswrapper[4776]: I1011 10:54:29.451161 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jdggk"] Oct 11 10:54:29.459388 master-2 kubenswrapper[4776]: I1011 10:54:29.459272 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jdggk"] Oct 11 10:54:29.476391 master-2 kubenswrapper[4776]: I1011 10:54:29.476287 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xptqx"] Oct 11 10:54:29.476788 master-2 kubenswrapper[4776]: E1011 10:54:29.476671 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="316381d6-4304-44b3-a742-50e80da7acd1" containerName="keystone-bootstrap" Oct 11 10:54:29.476788 master-2 kubenswrapper[4776]: I1011 10:54:29.476705 4776 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="316381d6-4304-44b3-a742-50e80da7acd1" containerName="keystone-bootstrap" Oct 11 10:54:29.477245 master-2 kubenswrapper[4776]: I1011 10:54:29.476899 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="316381d6-4304-44b3-a742-50e80da7acd1" containerName="keystone-bootstrap" Oct 11 10:54:29.479044 master-2 kubenswrapper[4776]: I1011 10:54:29.478996 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:29.482588 master-2 kubenswrapper[4776]: I1011 10:54:29.482507 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 11 10:54:29.483425 master-2 kubenswrapper[4776]: I1011 10:54:29.483400 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 11 10:54:29.483586 master-2 kubenswrapper[4776]: I1011 10:54:29.483564 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 11 10:54:29.491463 master-2 kubenswrapper[4776]: I1011 10:54:29.491403 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xptqx"] Oct 11 10:54:29.609511 master-2 kubenswrapper[4776]: I1011 10:54:29.607352 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-credential-keys\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:29.609511 master-2 kubenswrapper[4776]: I1011 10:54:29.607432 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-fernet-keys\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:29.609511 master-2 kubenswrapper[4776]: I1011 10:54:29.607503 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-scripts\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:29.609511 master-2 kubenswrapper[4776]: I1011 10:54:29.607539 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb4d9\" (UniqueName: \"kubernetes.io/projected/30996a86-1b86-4a67-bfea-0e63f7417196-kube-api-access-kb4d9\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:29.609511 master-2 kubenswrapper[4776]: I1011 10:54:29.607565 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-config-data\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:29.609511 master-2 kubenswrapper[4776]: I1011 10:54:29.608779 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-combined-ca-bundle\") pod \"keystone-bootstrap-xptqx\" (UID: 
\"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:29.675590 master-2 kubenswrapper[4776]: I1011 10:54:29.674998 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-96ecbc97-5be5-45e4-8942-00605756b89a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007\") pod \"glance-b5802-default-external-api-2\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:29.709959 master-2 kubenswrapper[4776]: I1011 10:54:29.709902 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-credential-keys\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:29.709959 master-2 kubenswrapper[4776]: I1011 10:54:29.709962 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-fernet-keys\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:29.710179 master-2 kubenswrapper[4776]: I1011 10:54:29.710004 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-scripts\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:29.710179 master-2 kubenswrapper[4776]: I1011 10:54:29.710036 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb4d9\" (UniqueName: \"kubernetes.io/projected/30996a86-1b86-4a67-bfea-0e63f7417196-kube-api-access-kb4d9\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:29.710179 master-2 kubenswrapper[4776]: I1011 10:54:29.710061 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-config-data\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:29.710279 master-2 kubenswrapper[4776]: I1011 10:54:29.710193 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-combined-ca-bundle\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:29.713834 master-2 kubenswrapper[4776]: I1011 10:54:29.713768 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-combined-ca-bundle\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:29.714015 master-2 kubenswrapper[4776]: I1011 10:54:29.713984 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-credential-keys\") pod \"keystone-bootstrap-xptqx\" (UID: 
\"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:29.715585 master-2 kubenswrapper[4776]: I1011 10:54:29.715550 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-scripts\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:29.715869 master-2 kubenswrapper[4776]: I1011 10:54:29.715781 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-fernet-keys\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:29.717794 master-2 kubenswrapper[4776]: I1011 10:54:29.717754 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-config-data\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:29.743464 master-2 kubenswrapper[4776]: I1011 10:54:29.743423 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb4d9\" (UniqueName: \"kubernetes.io/projected/30996a86-1b86-4a67-bfea-0e63f7417196-kube-api-access-kb4d9\") pod \"keystone-bootstrap-xptqx\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:29.802120 master-2 kubenswrapper[4776]: I1011 10:54:29.802062 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:29.918150 master-2 kubenswrapper[4776]: I1011 10:54:29.917770 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:30.077768 master-2 kubenswrapper[4776]: I1011 10:54:30.077715 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="316381d6-4304-44b3-a742-50e80da7acd1" path="/var/lib/kubelet/pods/316381d6-4304-44b3-a742-50e80da7acd1/volumes" Oct 11 10:54:30.392942 master-2 kubenswrapper[4776]: I1011 10:54:30.392889 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"d0cc3965-fd34-4442-b408-a5ae441443e4","Type":"ContainerStarted","Data":"30c682488fe3a4cbd0fa8fdf8a635610245b1344157964f363074a15ace75969"} Oct 11 10:54:30.429362 master-2 kubenswrapper[4776]: I1011 10:54:30.429278 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b5802-default-internal-api-0" podStartSLOduration=8.429258955 podStartE2EDuration="8.429258955s" podCreationTimestamp="2025-10-11 10:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:30.423229252 +0000 UTC m=+1705.207655971" watchObservedRunningTime="2025-10-11 10:54:30.429258955 +0000 UTC m=+1705.213685664" Oct 11 10:54:30.451660 master-2 kubenswrapper[4776]: W1011 10:54:30.451615 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30996a86_1b86_4a67_bfea_0e63f7417196.slice/crio-740014b42f7f5352557d95dd0518b5f24506765ebf058a5db025a483fda2e186 WatchSource:0}: Error finding container 740014b42f7f5352557d95dd0518b5f24506765ebf058a5db025a483fda2e186: Status 404 returned error can't find the container with id 740014b42f7f5352557d95dd0518b5f24506765ebf058a5db025a483fda2e186 Oct 11 10:54:30.455457 master-2 kubenswrapper[4776]: I1011 10:54:30.455395 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xptqx"] Oct 11 10:54:30.672722 master-2 kubenswrapper[4776]: I1011 10:54:30.671789 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-external-api-2"] Oct 11 10:54:31.409412 master-2 kubenswrapper[4776]: I1011 10:54:31.409326 4776 generic.go:334] "Generic (PLEG): container finished" podID="7cef6f34-fa11-4593-b4d8-c9ac415f1967" containerID="5a93f178b03e320516bcd32c99e245334eeef09b36cb1fbff0b5d12e1d56145d" exitCode=0 Oct 11 10:54:31.409981 master-2 kubenswrapper[4776]: I1011 10:54:31.409454 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5pz76" event={"ID":"7cef6f34-fa11-4593-b4d8-c9ac415f1967","Type":"ContainerDied","Data":"5a93f178b03e320516bcd32c99e245334eeef09b36cb1fbff0b5d12e1d56145d"} Oct 11 10:54:31.413389 master-2 kubenswrapper[4776]: I1011 10:54:31.413323 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-2" event={"ID":"a0831eef-0c2e-4d09-a44b-7276f30bc1cf","Type":"ContainerStarted","Data":"2d269532330f7a891baad11ad939ef35677256aa7bb563b56736833615edcf28"} Oct 11 10:54:31.413456 master-2 kubenswrapper[4776]: I1011 10:54:31.413391 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-2" event={"ID":"a0831eef-0c2e-4d09-a44b-7276f30bc1cf","Type":"ContainerStarted","Data":"5a48d5bbfd49d56d4f32777007bb97dc3fb7108ea65533008682389d26fd8acc"} Oct 11 10:54:31.415555 master-2 kubenswrapper[4776]: I1011 10:54:31.415497 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-xptqx" event={"ID":"30996a86-1b86-4a67-bfea-0e63f7417196","Type":"ContainerStarted","Data":"7a37bc55a741b7925fe73a8333e051e4eed1c5b9263c43dfa0598438fa7d12fc"} Oct 11 10:54:31.415652 master-2 kubenswrapper[4776]: I1011 10:54:31.415574 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xptqx" event={"ID":"30996a86-1b86-4a67-bfea-0e63f7417196","Type":"ContainerStarted","Data":"740014b42f7f5352557d95dd0518b5f24506765ebf058a5db025a483fda2e186"} Oct 11 10:54:31.463879 master-2 kubenswrapper[4776]: I1011 10:54:31.463791 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xptqx" podStartSLOduration=2.4637330840000002 podStartE2EDuration="2.463733084s" podCreationTimestamp="2025-10-11 10:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:31.455797149 +0000 UTC m=+1706.240223878" watchObservedRunningTime="2025-10-11 10:54:31.463733084 +0000 UTC m=+1706.248159803" Oct 11 10:54:32.425829 master-2 kubenswrapper[4776]: I1011 10:54:32.425785 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-2" event={"ID":"a0831eef-0c2e-4d09-a44b-7276f30bc1cf","Type":"ContainerStarted","Data":"5770704138c0c2f874908ca2cc1d2acaea03506574846327d58cb886820df9bf"} Oct 11 10:54:32.459077 master-2 kubenswrapper[4776]: I1011 10:54:32.458990 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b5802-default-external-api-2" podStartSLOduration=7.458968619 podStartE2EDuration="7.458968619s" podCreationTimestamp="2025-10-11 10:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:32.457874749 +0000 UTC m=+1707.242301468" watchObservedRunningTime="2025-10-11 10:54:32.458968619 +0000 UTC m=+1707.243395348" Oct 11 10:54:33.108960 master-2 kubenswrapper[4776]: I1011 10:54:33.108894 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-595686b98f-4tcwj"] Oct 11 10:54:33.115694 master-2 kubenswrapper[4776]: I1011 10:54:33.115623 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.116608 master-2 kubenswrapper[4776]: I1011 10:54:33.116557 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-595686b98f-4tcwj"] Oct 11 10:54:33.121757 master-2 kubenswrapper[4776]: I1011 10:54:33.120151 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 11 10:54:33.121757 master-2 kubenswrapper[4776]: I1011 10:54:33.120352 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 11 10:54:33.121757 master-2 kubenswrapper[4776]: I1011 10:54:33.120479 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 11 10:54:33.121757 master-2 kubenswrapper[4776]: I1011 10:54:33.120584 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Oct 11 10:54:33.121757 master-2 kubenswrapper[4776]: I1011 10:54:33.120743 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 11 10:54:33.278569 master-2 kubenswrapper[4776]: I1011 10:54:33.278506 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-svc\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.278724 master-2 kubenswrapper[4776]: I1011 10:54:33.278630 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-swift-storage-0\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.278855 master-2 kubenswrapper[4776]: I1011 10:54:33.278811 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znxxj\" (UniqueName: \"kubernetes.io/projected/70447ad9-31f0-4f6a-8c40-19fbe8141ada-kube-api-access-znxxj\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.278900 master-2 kubenswrapper[4776]: I1011 10:54:33.278881 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-nb\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.278946 master-2 kubenswrapper[4776]: I1011 10:54:33.278912 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-config\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.279047 master-2 kubenswrapper[4776]: I1011 10:54:33.279015 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-sb\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: 
\"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.341380 master-2 kubenswrapper[4776]: I1011 10:54:33.341325 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:33.386966 master-2 kubenswrapper[4776]: I1011 10:54:33.386850 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-svc\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.386966 master-2 kubenswrapper[4776]: I1011 10:54:33.386929 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-swift-storage-0\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.387181 master-2 kubenswrapper[4776]: I1011 10:54:33.387014 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znxxj\" (UniqueName: \"kubernetes.io/projected/70447ad9-31f0-4f6a-8c40-19fbe8141ada-kube-api-access-znxxj\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.387181 master-2 kubenswrapper[4776]: I1011 10:54:33.387050 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-config\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.387181 master-2 kubenswrapper[4776]: I1011 10:54:33.387073 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-nb\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.387181 master-2 kubenswrapper[4776]: I1011 10:54:33.387120 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-sb\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.389023 master-2 kubenswrapper[4776]: I1011 10:54:33.387901 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-svc\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.389023 master-2 kubenswrapper[4776]: I1011 10:54:33.388333 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-sb\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.389023 master-2 kubenswrapper[4776]: I1011 10:54:33.388790 4776 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-swift-storage-0\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.391615 master-2 kubenswrapper[4776]: I1011 10:54:33.389238 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-config\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.391615 master-2 kubenswrapper[4776]: I1011 10:54:33.389464 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-nb\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.423962 master-2 kubenswrapper[4776]: I1011 10:54:33.423472 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znxxj\" (UniqueName: \"kubernetes.io/projected/70447ad9-31f0-4f6a-8c40-19fbe8141ada-kube-api-access-znxxj\") pod \"dnsmasq-dns-595686b98f-4tcwj\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.433689 master-2 kubenswrapper[4776]: I1011 10:54:33.433630 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5pz76" event={"ID":"7cef6f34-fa11-4593-b4d8-c9ac415f1967","Type":"ContainerDied","Data":"e1f01b2335980f0a5eb4c3856d30b5245099ce20b959f182008d4ec5131f98bf"} Oct 11 10:54:33.434116 master-2 kubenswrapper[4776]: I1011 10:54:33.433720 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1f01b2335980f0a5eb4c3856d30b5245099ce20b959f182008d4ec5131f98bf" Oct 11 10:54:33.434116 master-2 kubenswrapper[4776]: I1011 10:54:33.433638 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5pz76" Oct 11 10:54:33.442144 master-2 kubenswrapper[4776]: I1011 10:54:33.442104 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:33.490509 master-2 kubenswrapper[4776]: I1011 10:54:33.490417 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cef6f34-fa11-4593-b4d8-c9ac415f1967-logs\") pod \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " Oct 11 10:54:33.490870 master-2 kubenswrapper[4776]: I1011 10:54:33.490593 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-combined-ca-bundle\") pod \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " Oct 11 10:54:33.490946 master-2 kubenswrapper[4776]: I1011 10:54:33.490928 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cef6f34-fa11-4593-b4d8-c9ac415f1967-logs" (OuterVolumeSpecName: "logs") pod "7cef6f34-fa11-4593-b4d8-c9ac415f1967" (UID: "7cef6f34-fa11-4593-b4d8-c9ac415f1967"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:54:33.492417 master-2 kubenswrapper[4776]: I1011 10:54:33.492239 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-scripts\") pod \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " Oct 11 10:54:33.492588 master-2 kubenswrapper[4776]: I1011 10:54:33.492572 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkk8v\" (UniqueName: \"kubernetes.io/projected/7cef6f34-fa11-4593-b4d8-c9ac415f1967-kube-api-access-wkk8v\") pod \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " Oct 11 10:54:33.492787 master-2 kubenswrapper[4776]: I1011 10:54:33.492764 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-config-data\") pod \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\" (UID: \"7cef6f34-fa11-4593-b4d8-c9ac415f1967\") " Oct 11 10:54:33.493627 master-2 kubenswrapper[4776]: I1011 10:54:33.493608 4776 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cef6f34-fa11-4593-b4d8-c9ac415f1967-logs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:33.495586 master-2 kubenswrapper[4776]: I1011 10:54:33.495550 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-scripts" (OuterVolumeSpecName: "scripts") pod "7cef6f34-fa11-4593-b4d8-c9ac415f1967" (UID: "7cef6f34-fa11-4593-b4d8-c9ac415f1967"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:33.495699 master-2 kubenswrapper[4776]: I1011 10:54:33.495654 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cef6f34-fa11-4593-b4d8-c9ac415f1967-kube-api-access-wkk8v" (OuterVolumeSpecName: "kube-api-access-wkk8v") pod "7cef6f34-fa11-4593-b4d8-c9ac415f1967" (UID: "7cef6f34-fa11-4593-b4d8-c9ac415f1967"). InnerVolumeSpecName "kube-api-access-wkk8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:33.526051 master-2 kubenswrapper[4776]: I1011 10:54:33.524931 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cef6f34-fa11-4593-b4d8-c9ac415f1967" (UID: "7cef6f34-fa11-4593-b4d8-c9ac415f1967"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:33.527316 master-2 kubenswrapper[4776]: I1011 10:54:33.527223 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-config-data" (OuterVolumeSpecName: "config-data") pod "7cef6f34-fa11-4593-b4d8-c9ac415f1967" (UID: "7cef6f34-fa11-4593-b4d8-c9ac415f1967"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:33.574070 master-2 kubenswrapper[4776]: I1011 10:54:33.573717 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6b597cbbf8-8sdfz"] Oct 11 10:54:33.574070 master-2 kubenswrapper[4776]: E1011 10:54:33.574028 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cef6f34-fa11-4593-b4d8-c9ac415f1967" containerName="placement-db-sync" Oct 11 10:54:33.574070 master-2 kubenswrapper[4776]: I1011 10:54:33.574040 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cef6f34-fa11-4593-b4d8-c9ac415f1967" containerName="placement-db-sync" Oct 11 10:54:33.577662 master-2 kubenswrapper[4776]: I1011 10:54:33.577610 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cef6f34-fa11-4593-b4d8-c9ac415f1967" containerName="placement-db-sync" Oct 11 10:54:33.578623 master-2 kubenswrapper[4776]: I1011 10:54:33.578607 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.582716 master-2 kubenswrapper[4776]: I1011 10:54:33.582130 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Oct 11 10:54:33.582716 master-2 kubenswrapper[4776]: I1011 10:54:33.582542 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Oct 11 10:54:33.602124 master-2 kubenswrapper[4776]: I1011 10:54:33.598702 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkk8v\" (UniqueName: \"kubernetes.io/projected/7cef6f34-fa11-4593-b4d8-c9ac415f1967-kube-api-access-wkk8v\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:33.602124 master-2 kubenswrapper[4776]: I1011 10:54:33.598815 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:33.602124 master-2 kubenswrapper[4776]: I1011 10:54:33.598858 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:33.602124 master-2 kubenswrapper[4776]: I1011 10:54:33.598873 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cef6f34-fa11-4593-b4d8-c9ac415f1967-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:33.602124 master-2 kubenswrapper[4776]: I1011 10:54:33.599302 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b597cbbf8-8sdfz"] Oct 11 10:54:33.700697 master-2 kubenswrapper[4776]: I1011 10:54:33.700609 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-config-data\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.700697 master-2 kubenswrapper[4776]: I1011 10:54:33.700693 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-logs\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 
10:54:33.700989 master-2 kubenswrapper[4776]: I1011 10:54:33.700735 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-scripts\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.700989 master-2 kubenswrapper[4776]: I1011 10:54:33.700769 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-combined-ca-bundle\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.700989 master-2 kubenswrapper[4776]: I1011 10:54:33.700785 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-public-tls-certs\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.700989 master-2 kubenswrapper[4776]: I1011 10:54:33.700813 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7szw\" (UniqueName: \"kubernetes.io/projected/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-kube-api-access-h7szw\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.700989 master-2 kubenswrapper[4776]: I1011 10:54:33.700846 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-internal-tls-certs\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.802490 master-2 kubenswrapper[4776]: I1011 10:54:33.802441 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-internal-tls-certs\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.803354 master-2 kubenswrapper[4776]: I1011 10:54:33.803338 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-config-data\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.803482 master-2 kubenswrapper[4776]: I1011 10:54:33.803468 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-logs\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.803606 master-2 kubenswrapper[4776]: I1011 10:54:33.803594 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-scripts\") pod \"placement-6b597cbbf8-8sdfz\" (UID: 
\"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.803710 master-2 kubenswrapper[4776]: I1011 10:54:33.803697 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-public-tls-certs\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.803802 master-2 kubenswrapper[4776]: I1011 10:54:33.803790 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-combined-ca-bundle\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.803894 master-2 kubenswrapper[4776]: I1011 10:54:33.803882 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7szw\" (UniqueName: \"kubernetes.io/projected/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-kube-api-access-h7szw\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.805427 master-2 kubenswrapper[4776]: I1011 10:54:33.805382 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-logs\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.807735 master-2 kubenswrapper[4776]: I1011 10:54:33.807715 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-internal-tls-certs\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.807859 master-2 kubenswrapper[4776]: I1011 10:54:33.807790 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-public-tls-certs\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.807936 master-2 kubenswrapper[4776]: I1011 10:54:33.807883 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-config-data\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.808208 master-2 kubenswrapper[4776]: I1011 10:54:33.808192 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-scripts\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.808325 master-2 kubenswrapper[4776]: I1011 10:54:33.808299 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-combined-ca-bundle\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " 
pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.825355 master-2 kubenswrapper[4776]: I1011 10:54:33.825320 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7szw\" (UniqueName: \"kubernetes.io/projected/e2cc5d2d-290c-487f-a0f9-3095b17e1fcb-kube-api-access-h7szw\") pod \"placement-6b597cbbf8-8sdfz\" (UID: \"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb\") " pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.937475 master-2 kubenswrapper[4776]: I1011 10:54:33.937406 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-595686b98f-4tcwj"] Oct 11 10:54:33.944584 master-2 kubenswrapper[4776]: W1011 10:54:33.943939 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70447ad9_31f0_4f6a_8c40_19fbe8141ada.slice/crio-deced8507734b7d702b6a986cf9629954a10ef889b4d591c0c86c8d9b9826ad7 WatchSource:0}: Error finding container deced8507734b7d702b6a986cf9629954a10ef889b4d591c0c86c8d9b9826ad7: Status 404 returned error can't find the container with id deced8507734b7d702b6a986cf9629954a10ef889b4d591c0c86c8d9b9826ad7 Oct 11 10:54:33.944584 master-2 kubenswrapper[4776]: I1011 10:54:33.943982 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-nz82h"] Oct 11 10:54:33.944584 master-2 kubenswrapper[4776]: I1011 10:54:33.944359 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:33.945480 master-2 kubenswrapper[4776]: I1011 10:54:33.945443 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-nz82h" Oct 11 10:54:33.948930 master-2 kubenswrapper[4776]: I1011 10:54:33.948861 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Oct 11 10:54:33.959404 master-2 kubenswrapper[4776]: I1011 10:54:33.956741 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-nz82h"] Oct 11 10:54:34.008807 master-2 kubenswrapper[4776]: I1011 10:54:34.008540 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-config-data\") pod \"heat-db-sync-nz82h\" (UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " pod="openstack/heat-db-sync-nz82h" Oct 11 10:54:34.008807 master-2 kubenswrapper[4776]: I1011 10:54:34.008617 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf8n6\" (UniqueName: \"kubernetes.io/projected/005f2579-b848-40fd-b3f3-2d3383344047-kube-api-access-hf8n6\") pod \"heat-db-sync-nz82h\" (UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " pod="openstack/heat-db-sync-nz82h" Oct 11 10:54:34.008944 master-2 kubenswrapper[4776]: I1011 10:54:34.008814 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-combined-ca-bundle\") pod \"heat-db-sync-nz82h\" (UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " pod="openstack/heat-db-sync-nz82h" Oct 11 10:54:34.111739 master-2 kubenswrapper[4776]: I1011 10:54:34.111668 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-combined-ca-bundle\") pod 
\"heat-db-sync-nz82h\" (UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " pod="openstack/heat-db-sync-nz82h" Oct 11 10:54:34.111838 master-2 kubenswrapper[4776]: I1011 10:54:34.111812 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-config-data\") pod \"heat-db-sync-nz82h\" (UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " pod="openstack/heat-db-sync-nz82h" Oct 11 10:54:34.111887 master-2 kubenswrapper[4776]: I1011 10:54:34.111855 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf8n6\" (UniqueName: \"kubernetes.io/projected/005f2579-b848-40fd-b3f3-2d3383344047-kube-api-access-hf8n6\") pod \"heat-db-sync-nz82h\" (UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " pod="openstack/heat-db-sync-nz82h" Oct 11 10:54:34.115769 master-2 kubenswrapper[4776]: I1011 10:54:34.115724 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-combined-ca-bundle\") pod \"heat-db-sync-nz82h\" (UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " pod="openstack/heat-db-sync-nz82h" Oct 11 10:54:34.116068 master-2 kubenswrapper[4776]: I1011 10:54:34.116033 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-config-data\") pod \"heat-db-sync-nz82h\" (UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " pod="openstack/heat-db-sync-nz82h" Oct 11 10:54:34.134631 master-2 kubenswrapper[4776]: I1011 10:54:34.134598 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf8n6\" (UniqueName: \"kubernetes.io/projected/005f2579-b848-40fd-b3f3-2d3383344047-kube-api-access-hf8n6\") pod \"heat-db-sync-nz82h\" (UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " pod="openstack/heat-db-sync-nz82h" Oct 11 10:54:34.237906 master-2 kubenswrapper[4776]: I1011 10:54:34.237764 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b5802-db-sync-4sh7r"] Oct 11 10:54:34.239070 master-2 kubenswrapper[4776]: I1011 10:54:34.239033 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:54:34.242024 master-2 kubenswrapper[4776]: I1011 10:54:34.241989 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-scripts" Oct 11 10:54:34.242264 master-2 kubenswrapper[4776]: I1011 10:54:34.242217 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-config-data" Oct 11 10:54:34.254310 master-2 kubenswrapper[4776]: I1011 10:54:34.254258 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-db-sync-4sh7r"] Oct 11 10:54:34.269928 master-2 kubenswrapper[4776]: I1011 10:54:34.269872 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-nz82h" Oct 11 10:54:34.314482 master-2 kubenswrapper[4776]: I1011 10:54:34.314428 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-scripts\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:54:34.314656 master-2 kubenswrapper[4776]: I1011 10:54:34.314510 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-config-data\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:54:34.314974 master-2 kubenswrapper[4776]: I1011 10:54:34.314936 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-db-sync-config-data\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:54:34.315041 master-2 kubenswrapper[4776]: I1011 10:54:34.315020 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-etc-machine-id\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:54:34.315151 master-2 kubenswrapper[4776]: I1011 10:54:34.315131 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkn85\" (UniqueName: \"kubernetes.io/projected/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-kube-api-access-dkn85\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:54:34.315264 master-2 kubenswrapper[4776]: I1011 10:54:34.315244 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-combined-ca-bundle\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:54:34.416883 master-2 kubenswrapper[4776]: I1011 10:54:34.416571 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-db-sync-config-data\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:54:34.416883 master-2 kubenswrapper[4776]: I1011 10:54:34.416637 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-etc-machine-id\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:54:34.416883 master-2 kubenswrapper[4776]: I1011 10:54:34.416701 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkn85\" (UniqueName: 
\"kubernetes.io/projected/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-kube-api-access-dkn85\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:54:34.416883 master-2 kubenswrapper[4776]: I1011 10:54:34.416748 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-combined-ca-bundle\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:54:34.416883 master-2 kubenswrapper[4776]: I1011 10:54:34.416779 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-scripts\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:54:34.416883 master-2 kubenswrapper[4776]: I1011 10:54:34.416801 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-etc-machine-id\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:54:34.418614 master-2 kubenswrapper[4776]: I1011 10:54:34.416812 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-config-data\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:54:34.420309 master-2 kubenswrapper[4776]: I1011 10:54:34.420264 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-db-sync-config-data\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:54:34.421522 master-2 kubenswrapper[4776]: I1011 10:54:34.421483 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-config-data\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:54:34.421740 master-2 kubenswrapper[4776]: I1011 10:54:34.421222 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-combined-ca-bundle\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:54:34.422965 master-2 kubenswrapper[4776]: I1011 10:54:34.422919 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-scripts\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:54:34.440867 master-2 kubenswrapper[4776]: I1011 10:54:34.440818 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkn85\" (UniqueName: 
\"kubernetes.io/projected/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-kube-api-access-dkn85\") pod \"cinder-b5802-db-sync-4sh7r\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:54:34.448770 master-2 kubenswrapper[4776]: I1011 10:54:34.448712 4776 generic.go:334] "Generic (PLEG): container finished" podID="70447ad9-31f0-4f6a-8c40-19fbe8141ada" containerID="304ada663003ba027290aa5f510ba1f9e62024cd530b437aab6c3371a94b50d9" exitCode=0 Oct 11 10:54:34.448862 master-2 kubenswrapper[4776]: I1011 10:54:34.448789 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" event={"ID":"70447ad9-31f0-4f6a-8c40-19fbe8141ada","Type":"ContainerDied","Data":"304ada663003ba027290aa5f510ba1f9e62024cd530b437aab6c3371a94b50d9"} Oct 11 10:54:34.448900 master-2 kubenswrapper[4776]: I1011 10:54:34.448879 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" event={"ID":"70447ad9-31f0-4f6a-8c40-19fbe8141ada","Type":"ContainerStarted","Data":"deced8507734b7d702b6a986cf9629954a10ef889b4d591c0c86c8d9b9826ad7"} Oct 11 10:54:34.454624 master-2 kubenswrapper[4776]: I1011 10:54:34.454581 4776 generic.go:334] "Generic (PLEG): container finished" podID="30996a86-1b86-4a67-bfea-0e63f7417196" containerID="7a37bc55a741b7925fe73a8333e051e4eed1c5b9263c43dfa0598438fa7d12fc" exitCode=0 Oct 11 10:54:34.454725 master-2 kubenswrapper[4776]: I1011 10:54:34.454644 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xptqx" event={"ID":"30996a86-1b86-4a67-bfea-0e63f7417196","Type":"ContainerDied","Data":"7a37bc55a741b7925fe73a8333e051e4eed1c5b9263c43dfa0598438fa7d12fc"} Oct 11 10:54:34.525153 master-2 kubenswrapper[4776]: I1011 10:54:34.525023 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-2dgxj"] Oct 11 10:54:34.526360 master-2 kubenswrapper[4776]: I1011 10:54:34.526323 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2dgxj" Oct 11 10:54:34.529457 master-2 kubenswrapper[4776]: I1011 10:54:34.529351 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 11 10:54:34.530653 master-2 kubenswrapper[4776]: I1011 10:54:34.529564 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 11 10:54:34.538813 master-2 kubenswrapper[4776]: I1011 10:54:34.538778 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-2dgxj"] Oct 11 10:54:34.557578 master-2 kubenswrapper[4776]: I1011 10:54:34.557515 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:54:34.581783 master-2 kubenswrapper[4776]: I1011 10:54:34.581744 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b597cbbf8-8sdfz"] Oct 11 10:54:34.587109 master-2 kubenswrapper[4776]: W1011 10:54:34.587058 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2cc5d2d_290c_487f_a0f9_3095b17e1fcb.slice/crio-ccb08834d598ceb26cdcac6fff1ec4cb2c1cf59d30c091f534685ff710dd9097 WatchSource:0}: Error finding container ccb08834d598ceb26cdcac6fff1ec4cb2c1cf59d30c091f534685ff710dd9097: Status 404 returned error can't find the container with id ccb08834d598ceb26cdcac6fff1ec4cb2c1cf59d30c091f534685ff710dd9097 Oct 11 10:54:34.621776 master-2 kubenswrapper[4776]: I1011 10:54:34.621667 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmbf9\" (UniqueName: \"kubernetes.io/projected/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-kube-api-access-rmbf9\") pod \"neutron-db-sync-2dgxj\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " pod="openstack/neutron-db-sync-2dgxj" Oct 11 10:54:34.622165 master-2 kubenswrapper[4776]: I1011 10:54:34.621820 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-combined-ca-bundle\") pod \"neutron-db-sync-2dgxj\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " pod="openstack/neutron-db-sync-2dgxj" Oct 11 10:54:34.622165 master-2 kubenswrapper[4776]: I1011 10:54:34.621856 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-config\") pod \"neutron-db-sync-2dgxj\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " pod="openstack/neutron-db-sync-2dgxj" Oct 11 10:54:34.692409 master-2 kubenswrapper[4776]: I1011 10:54:34.692352 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-nz82h"] Oct 11 10:54:34.698535 master-2 kubenswrapper[4776]: W1011 10:54:34.698208 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod005f2579_b848_40fd_b3f3_2d3383344047.slice/crio-cc5b9c7250419dc166026542bf46853e7a0777efbedca2dc833bde7e55dd6960 WatchSource:0}: Error finding container cc5b9c7250419dc166026542bf46853e7a0777efbedca2dc833bde7e55dd6960: Status 404 returned error can't find the container with id cc5b9c7250419dc166026542bf46853e7a0777efbedca2dc833bde7e55dd6960 Oct 11 10:54:34.724922 master-2 kubenswrapper[4776]: I1011 10:54:34.724857 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmbf9\" (UniqueName: \"kubernetes.io/projected/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-kube-api-access-rmbf9\") pod \"neutron-db-sync-2dgxj\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " pod="openstack/neutron-db-sync-2dgxj" Oct 11 10:54:34.725142 master-2 kubenswrapper[4776]: I1011 10:54:34.724961 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-combined-ca-bundle\") pod \"neutron-db-sync-2dgxj\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " pod="openstack/neutron-db-sync-2dgxj" Oct 11 10:54:34.725142 
master-2 kubenswrapper[4776]: I1011 10:54:34.724993 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-config\") pod \"neutron-db-sync-2dgxj\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " pod="openstack/neutron-db-sync-2dgxj" Oct 11 10:54:34.732297 master-2 kubenswrapper[4776]: I1011 10:54:34.732172 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-combined-ca-bundle\") pod \"neutron-db-sync-2dgxj\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " pod="openstack/neutron-db-sync-2dgxj" Oct 11 10:54:34.732909 master-2 kubenswrapper[4776]: I1011 10:54:34.732878 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-config\") pod \"neutron-db-sync-2dgxj\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " pod="openstack/neutron-db-sync-2dgxj" Oct 11 10:54:34.749478 master-2 kubenswrapper[4776]: I1011 10:54:34.749428 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmbf9\" (UniqueName: \"kubernetes.io/projected/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-kube-api-access-rmbf9\") pod \"neutron-db-sync-2dgxj\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " pod="openstack/neutron-db-sync-2dgxj" Oct 11 10:54:34.881499 master-2 kubenswrapper[4776]: I1011 10:54:34.881447 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2dgxj" Oct 11 10:54:35.034425 master-2 kubenswrapper[4776]: W1011 10:54:35.033352 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4f2a1bf_160f_40ad_bc2c_a7286a90b988.slice/crio-fa7c55aca3df89af2240ffd38909152d52165a2a24214ef5226c24aacb7b9aca WatchSource:0}: Error finding container fa7c55aca3df89af2240ffd38909152d52165a2a24214ef5226c24aacb7b9aca: Status 404 returned error can't find the container with id fa7c55aca3df89af2240ffd38909152d52165a2a24214ef5226c24aacb7b9aca Oct 11 10:54:35.051879 master-2 kubenswrapper[4776]: I1011 10:54:35.051835 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-db-sync-4sh7r"] Oct 11 10:54:35.331941 master-2 kubenswrapper[4776]: I1011 10:54:35.331550 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-2dgxj"] Oct 11 10:54:35.332123 master-2 kubenswrapper[4776]: W1011 10:54:35.331991 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd90a5c6e_6cd2_4396_b38c_dc0e03da9d38.slice/crio-f641f5141915e0e5383d1c25132b34d7f73e6431d60ee61f32ba0f43bc3081a9 WatchSource:0}: Error finding container f641f5141915e0e5383d1c25132b34d7f73e6431d60ee61f32ba0f43bc3081a9: Status 404 returned error can't find the container with id f641f5141915e0e5383d1c25132b34d7f73e6431d60ee61f32ba0f43bc3081a9 Oct 11 10:54:35.467931 master-2 kubenswrapper[4776]: I1011 10:54:35.467880 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2dgxj" event={"ID":"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38","Type":"ContainerStarted","Data":"f641f5141915e0e5383d1c25132b34d7f73e6431d60ee61f32ba0f43bc3081a9"} Oct 11 10:54:35.470325 master-2 kubenswrapper[4776]: I1011 10:54:35.470304 4776 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" event={"ID":"70447ad9-31f0-4f6a-8c40-19fbe8141ada","Type":"ContainerStarted","Data":"459c94f48331f93d40730495be1474d6c1c89c8df43275f7b4ad519ea521cb3d"} Oct 11 10:54:35.470472 master-2 kubenswrapper[4776]: I1011 10:54:35.470446 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:35.474191 master-2 kubenswrapper[4776]: I1011 10:54:35.474126 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b597cbbf8-8sdfz" event={"ID":"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb","Type":"ContainerStarted","Data":"cd4fe9616d55b89de28b5a72a4c471009475f011558061464d4070a257189b82"} Oct 11 10:54:35.474191 master-2 kubenswrapper[4776]: I1011 10:54:35.474176 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b597cbbf8-8sdfz" event={"ID":"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb","Type":"ContainerStarted","Data":"1336b4e886e0824650469ba3f4a22a5540aa9fdd626ba77cdbde215582498998"} Oct 11 10:54:35.474287 master-2 kubenswrapper[4776]: I1011 10:54:35.474194 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b597cbbf8-8sdfz" event={"ID":"e2cc5d2d-290c-487f-a0f9-3095b17e1fcb","Type":"ContainerStarted","Data":"ccb08834d598ceb26cdcac6fff1ec4cb2c1cf59d30c091f534685ff710dd9097"} Oct 11 10:54:35.474287 master-2 kubenswrapper[4776]: I1011 10:54:35.474231 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:35.474350 master-2 kubenswrapper[4776]: I1011 10:54:35.474319 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:54:35.475681 master-2 kubenswrapper[4776]: I1011 10:54:35.475623 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-db-sync-4sh7r" event={"ID":"c4f2a1bf-160f-40ad-bc2c-a7286a90b988","Type":"ContainerStarted","Data":"fa7c55aca3df89af2240ffd38909152d52165a2a24214ef5226c24aacb7b9aca"} Oct 11 10:54:35.476830 master-2 kubenswrapper[4776]: I1011 10:54:35.476799 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-nz82h" event={"ID":"005f2579-b848-40fd-b3f3-2d3383344047","Type":"ContainerStarted","Data":"cc5b9c7250419dc166026542bf46853e7a0777efbedca2dc833bde7e55dd6960"} Oct 11 10:54:35.502528 master-2 kubenswrapper[4776]: I1011 10:54:35.501953 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" podStartSLOduration=2.501935446 podStartE2EDuration="2.501935446s" podCreationTimestamp="2025-10-11 10:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:35.498436651 +0000 UTC m=+1710.282863360" watchObservedRunningTime="2025-10-11 10:54:35.501935446 +0000 UTC m=+1710.286362155" Oct 11 10:54:35.543197 master-2 kubenswrapper[4776]: I1011 10:54:35.543118 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6b597cbbf8-8sdfz" podStartSLOduration=2.543099221 podStartE2EDuration="2.543099221s" podCreationTimestamp="2025-10-11 10:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:35.535602278 +0000 UTC m=+1710.320028987" watchObservedRunningTime="2025-10-11 10:54:35.543099221 +0000 UTC m=+1710.327525930" 
Oct 11 10:54:36.407713 master-2 kubenswrapper[4776]: I1011 10:54:36.407648 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:36.454956 master-2 kubenswrapper[4776]: I1011 10:54:36.454894 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:36.455198 master-2 kubenswrapper[4776]: I1011 10:54:36.454963 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:36.489486 master-2 kubenswrapper[4776]: I1011 10:54:36.489433 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb4d9\" (UniqueName: \"kubernetes.io/projected/30996a86-1b86-4a67-bfea-0e63f7417196-kube-api-access-kb4d9\") pod \"30996a86-1b86-4a67-bfea-0e63f7417196\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " Oct 11 10:54:36.493574 master-2 kubenswrapper[4776]: I1011 10:54:36.493544 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:36.498855 master-2 kubenswrapper[4776]: I1011 10:54:36.498812 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-fernet-keys\") pod \"30996a86-1b86-4a67-bfea-0e63f7417196\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " Oct 11 10:54:36.498942 master-2 kubenswrapper[4776]: I1011 10:54:36.498926 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-config-data\") pod \"30996a86-1b86-4a67-bfea-0e63f7417196\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " Oct 11 10:54:36.499010 master-2 kubenswrapper[4776]: I1011 10:54:36.498993 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-credential-keys\") pod \"30996a86-1b86-4a67-bfea-0e63f7417196\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " Oct 11 10:54:36.499046 master-2 kubenswrapper[4776]: I1011 10:54:36.499024 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-scripts\") pod \"30996a86-1b86-4a67-bfea-0e63f7417196\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " Oct 11 10:54:36.499089 master-2 kubenswrapper[4776]: I1011 10:54:36.499053 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-combined-ca-bundle\") pod \"30996a86-1b86-4a67-bfea-0e63f7417196\" (UID: \"30996a86-1b86-4a67-bfea-0e63f7417196\") " Oct 11 10:54:36.517762 master-2 kubenswrapper[4776]: I1011 10:54:36.502838 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-scripts" (OuterVolumeSpecName: "scripts") pod "30996a86-1b86-4a67-bfea-0e63f7417196" (UID: "30996a86-1b86-4a67-bfea-0e63f7417196"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:36.518319 master-2 kubenswrapper[4776]: I1011 10:54:36.518271 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "30996a86-1b86-4a67-bfea-0e63f7417196" (UID: "30996a86-1b86-4a67-bfea-0e63f7417196"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:36.529982 master-2 kubenswrapper[4776]: I1011 10:54:36.529830 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30996a86-1b86-4a67-bfea-0e63f7417196-kube-api-access-kb4d9" (OuterVolumeSpecName: "kube-api-access-kb4d9") pod "30996a86-1b86-4a67-bfea-0e63f7417196" (UID: "30996a86-1b86-4a67-bfea-0e63f7417196"). InnerVolumeSpecName "kube-api-access-kb4d9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:54:36.532153 master-2 kubenswrapper[4776]: I1011 10:54:36.532101 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xptqx" event={"ID":"30996a86-1b86-4a67-bfea-0e63f7417196","Type":"ContainerDied","Data":"740014b42f7f5352557d95dd0518b5f24506765ebf058a5db025a483fda2e186"} Oct 11 10:54:36.532153 master-2 kubenswrapper[4776]: I1011 10:54:36.532151 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="740014b42f7f5352557d95dd0518b5f24506765ebf058a5db025a483fda2e186" Oct 11 10:54:36.532270 master-2 kubenswrapper[4776]: I1011 10:54:36.532234 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xptqx" Oct 11 10:54:36.539946 master-2 kubenswrapper[4776]: I1011 10:54:36.539824 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "30996a86-1b86-4a67-bfea-0e63f7417196" (UID: "30996a86-1b86-4a67-bfea-0e63f7417196"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:36.553323 master-2 kubenswrapper[4776]: I1011 10:54:36.550749 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2dgxj" event={"ID":"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38","Type":"ContainerStarted","Data":"b7e0d5acf6bbdc53a5ba11187ce29782ecfb6106125d3631307f9dca40bcd06a"} Oct 11 10:54:36.553323 master-2 kubenswrapper[4776]: I1011 10:54:36.551600 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:36.578542 master-2 kubenswrapper[4776]: I1011 10:54:36.578486 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-config-data" (OuterVolumeSpecName: "config-data") pod "30996a86-1b86-4a67-bfea-0e63f7417196" (UID: "30996a86-1b86-4a67-bfea-0e63f7417196"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:36.585089 master-2 kubenswrapper[4776]: I1011 10:54:36.584948 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:36.599168 master-2 kubenswrapper[4776]: I1011 10:54:36.598252 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-2dgxj" podStartSLOduration=2.598202427 podStartE2EDuration="2.598202427s" podCreationTimestamp="2025-10-11 10:54:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:36.591466315 +0000 UTC m=+1711.375893024" watchObservedRunningTime="2025-10-11 10:54:36.598202427 +0000 UTC m=+1711.382629136" Oct 11 10:54:36.608475 master-2 kubenswrapper[4776]: I1011 10:54:36.604044 4776 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-credential-keys\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:36.608475 master-2 kubenswrapper[4776]: I1011 10:54:36.604087 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:36.608475 master-2 kubenswrapper[4776]: I1011 10:54:36.604101 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb4d9\" (UniqueName: \"kubernetes.io/projected/30996a86-1b86-4a67-bfea-0e63f7417196-kube-api-access-kb4d9\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:36.608475 master-2 kubenswrapper[4776]: I1011 10:54:36.604113 4776 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-fernet-keys\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:36.608475 master-2 kubenswrapper[4776]: I1011 10:54:36.604124 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:36.697937 master-2 kubenswrapper[4776]: I1011 10:54:36.697879 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30996a86-1b86-4a67-bfea-0e63f7417196" (UID: "30996a86-1b86-4a67-bfea-0e63f7417196"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:54:36.709796 master-2 kubenswrapper[4776]: I1011 10:54:36.709750 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30996a86-1b86-4a67-bfea-0e63f7417196-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:54:36.748997 master-2 kubenswrapper[4776]: I1011 10:54:36.747611 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-848fcbb4df-54n8l"] Oct 11 10:54:36.748997 master-2 kubenswrapper[4776]: E1011 10:54:36.747955 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30996a86-1b86-4a67-bfea-0e63f7417196" containerName="keystone-bootstrap" Oct 11 10:54:36.748997 master-2 kubenswrapper[4776]: I1011 10:54:36.747968 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="30996a86-1b86-4a67-bfea-0e63f7417196" containerName="keystone-bootstrap" Oct 11 10:54:36.748997 master-2 kubenswrapper[4776]: I1011 10:54:36.748131 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="30996a86-1b86-4a67-bfea-0e63f7417196" containerName="keystone-bootstrap" Oct 11 10:54:36.749322 master-2 kubenswrapper[4776]: I1011 10:54:36.749190 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:36.755748 master-2 kubenswrapper[4776]: I1011 10:54:36.754217 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Oct 11 10:54:36.759626 master-2 kubenswrapper[4776]: I1011 10:54:36.759604 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Oct 11 10:54:36.784214 master-2 kubenswrapper[4776]: I1011 10:54:36.783598 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-848fcbb4df-54n8l"] Oct 11 10:54:36.914034 master-2 kubenswrapper[4776]: I1011 10:54:36.913966 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpjx8\" (UniqueName: \"kubernetes.io/projected/2ec8aba5-db0a-4c5f-a876-c513af95f945-kube-api-access-kpjx8\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:36.914217 master-2 kubenswrapper[4776]: I1011 10:54:36.914098 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-public-tls-certs\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:36.914217 master-2 kubenswrapper[4776]: I1011 10:54:36.914141 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-internal-tls-certs\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:36.914217 master-2 kubenswrapper[4776]: I1011 10:54:36.914190 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-config-data\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" 
Oct 11 10:54:36.914217 master-2 kubenswrapper[4776]: I1011 10:54:36.914210 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-credential-keys\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:36.914789 master-2 kubenswrapper[4776]: I1011 10:54:36.914354 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-combined-ca-bundle\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:36.914789 master-2 kubenswrapper[4776]: I1011 10:54:36.914415 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-scripts\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:36.914789 master-2 kubenswrapper[4776]: I1011 10:54:36.914622 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-fernet-keys\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.016376 master-2 kubenswrapper[4776]: I1011 10:54:37.016333 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-combined-ca-bundle\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.016376 master-2 kubenswrapper[4776]: I1011 10:54:37.016381 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-scripts\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.017213 master-2 kubenswrapper[4776]: I1011 10:54:37.016449 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-fernet-keys\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.017213 master-2 kubenswrapper[4776]: I1011 10:54:37.016499 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpjx8\" (UniqueName: \"kubernetes.io/projected/2ec8aba5-db0a-4c5f-a876-c513af95f945-kube-api-access-kpjx8\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.017213 master-2 kubenswrapper[4776]: I1011 10:54:37.016555 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-public-tls-certs\") pod \"keystone-848fcbb4df-54n8l\" (UID: 
\"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.017213 master-2 kubenswrapper[4776]: I1011 10:54:37.016591 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-internal-tls-certs\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.017213 master-2 kubenswrapper[4776]: I1011 10:54:37.016658 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-config-data\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.017213 master-2 kubenswrapper[4776]: I1011 10:54:37.016689 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-credential-keys\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.021638 master-2 kubenswrapper[4776]: I1011 10:54:37.021590 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-public-tls-certs\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.021638 master-2 kubenswrapper[4776]: I1011 10:54:37.021625 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-internal-tls-certs\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.022174 master-2 kubenswrapper[4776]: I1011 10:54:37.022126 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-combined-ca-bundle\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.023196 master-2 kubenswrapper[4776]: I1011 10:54:37.022398 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-credential-keys\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.024362 master-2 kubenswrapper[4776]: I1011 10:54:37.024320 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-fernet-keys\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.024556 master-2 kubenswrapper[4776]: I1011 10:54:37.024522 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-scripts\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " 
pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.027469 master-2 kubenswrapper[4776]: I1011 10:54:37.027299 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec8aba5-db0a-4c5f-a876-c513af95f945-config-data\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.039262 master-2 kubenswrapper[4776]: I1011 10:54:37.038799 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpjx8\" (UniqueName: \"kubernetes.io/projected/2ec8aba5-db0a-4c5f-a876-c513af95f945-kube-api-access-kpjx8\") pod \"keystone-848fcbb4df-54n8l\" (UID: \"2ec8aba5-db0a-4c5f-a876-c513af95f945\") " pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.066882 master-2 kubenswrapper[4776]: I1011 10:54:37.066829 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:37.510912 master-2 kubenswrapper[4776]: W1011 10:54:37.505151 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ec8aba5_db0a_4c5f_a876_c513af95f945.slice/crio-f27cb0c0bb58af608a20b3614ded308c9f6b525acf61da80a34f310f8d2fe543 WatchSource:0}: Error finding container f27cb0c0bb58af608a20b3614ded308c9f6b525acf61da80a34f310f8d2fe543: Status 404 returned error can't find the container with id f27cb0c0bb58af608a20b3614ded308c9f6b525acf61da80a34f310f8d2fe543 Oct 11 10:54:37.510912 master-2 kubenswrapper[4776]: I1011 10:54:37.505558 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-848fcbb4df-54n8l"] Oct 11 10:54:37.566618 master-2 kubenswrapper[4776]: I1011 10:54:37.566518 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-848fcbb4df-54n8l" event={"ID":"2ec8aba5-db0a-4c5f-a876-c513af95f945","Type":"ContainerStarted","Data":"f27cb0c0bb58af608a20b3614ded308c9f6b525acf61da80a34f310f8d2fe543"} Oct 11 10:54:37.566618 master-2 kubenswrapper[4776]: I1011 10:54:37.566594 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:37.706601 master-2 kubenswrapper[4776]: I1011 10:54:37.706553 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-n7nm2"] Oct 11 10:54:37.709956 master-2 kubenswrapper[4776]: I1011 10:54:37.709572 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.714702 master-2 kubenswrapper[4776]: I1011 10:54:37.714143 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts" Oct 11 10:54:37.714702 master-2 kubenswrapper[4776]: I1011 10:54:37.714358 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Oct 11 10:54:37.718706 master-2 kubenswrapper[4776]: I1011 10:54:37.718630 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-n7nm2"] Oct 11 10:54:37.834772 master-2 kubenswrapper[4776]: I1011 10:54:37.834720 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh8v5\" (UniqueName: \"kubernetes.io/projected/4a1c4d38-1f25-4465-9976-43be28a3b282-kube-api-access-fh8v5\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.834963 master-2 kubenswrapper[4776]: I1011 10:54:37.834791 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-combined-ca-bundle\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.834963 master-2 kubenswrapper[4776]: I1011 10:54:37.834856 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-scripts\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.835048 master-2 kubenswrapper[4776]: I1011 10:54:37.834978 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.835048 master-2 kubenswrapper[4776]: I1011 10:54:37.835012 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/4a1c4d38-1f25-4465-9976-43be28a3b282-etc-podinfo\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.835481 master-2 kubenswrapper[4776]: I1011 10:54:37.835318 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data-merged\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.936627 master-2 kubenswrapper[4776]: I1011 10:54:37.936572 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.936627 master-2 kubenswrapper[4776]: I1011 10:54:37.936629 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" 
(UniqueName: \"kubernetes.io/downward-api/4a1c4d38-1f25-4465-9976-43be28a3b282-etc-podinfo\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.936627 master-2 kubenswrapper[4776]: I1011 10:54:37.936653 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data-merged\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.936978 master-2 kubenswrapper[4776]: I1011 10:54:37.936747 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh8v5\" (UniqueName: \"kubernetes.io/projected/4a1c4d38-1f25-4465-9976-43be28a3b282-kube-api-access-fh8v5\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.936978 master-2 kubenswrapper[4776]: I1011 10:54:37.936771 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-combined-ca-bundle\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.936978 master-2 kubenswrapper[4776]: I1011 10:54:37.936791 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-scripts\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.937711 master-2 kubenswrapper[4776]: I1011 10:54:37.937668 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data-merged\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.941080 master-2 kubenswrapper[4776]: I1011 10:54:37.941058 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-scripts\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.941180 master-2 kubenswrapper[4776]: I1011 10:54:37.941062 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/4a1c4d38-1f25-4465-9976-43be28a3b282-etc-podinfo\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.941642 master-2 kubenswrapper[4776]: I1011 10:54:37.941612 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-combined-ca-bundle\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.941764 master-2 kubenswrapper[4776]: I1011 10:54:37.941727 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data\") pod \"ironic-db-sync-n7nm2\" (UID: 
\"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:37.961568 master-2 kubenswrapper[4776]: I1011 10:54:37.961391 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh8v5\" (UniqueName: \"kubernetes.io/projected/4a1c4d38-1f25-4465-9976-43be28a3b282-kube-api-access-fh8v5\") pod \"ironic-db-sync-n7nm2\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:38.031414 master-2 kubenswrapper[4776]: I1011 10:54:38.031351 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:54:38.486313 master-2 kubenswrapper[4776]: I1011 10:54:38.481639 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-n7nm2"] Oct 11 10:54:38.505088 master-2 kubenswrapper[4776]: W1011 10:54:38.504922 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a1c4d38_1f25_4465_9976_43be28a3b282.slice/crio-08c94658c701837f72a72b62e6464a19fa3e877f799281df15d5cc453d495789 WatchSource:0}: Error finding container 08c94658c701837f72a72b62e6464a19fa3e877f799281df15d5cc453d495789: Status 404 returned error can't find the container with id 08c94658c701837f72a72b62e6464a19fa3e877f799281df15d5cc453d495789 Oct 11 10:54:38.576076 master-2 kubenswrapper[4776]: I1011 10:54:38.576029 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-n7nm2" event={"ID":"4a1c4d38-1f25-4465-9976-43be28a3b282","Type":"ContainerStarted","Data":"08c94658c701837f72a72b62e6464a19fa3e877f799281df15d5cc453d495789"} Oct 11 10:54:38.578614 master-2 kubenswrapper[4776]: I1011 10:54:38.578369 4776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 11 10:54:38.579952 master-2 kubenswrapper[4776]: I1011 10:54:38.579477 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-848fcbb4df-54n8l" event={"ID":"2ec8aba5-db0a-4c5f-a876-c513af95f945","Type":"ContainerStarted","Data":"2f2b5056a73fd316db0802a3f82120d51f77b82d3025ad8fc6ca4a8cf2a50912"} Oct 11 10:54:38.579952 master-2 kubenswrapper[4776]: I1011 10:54:38.579568 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:54:38.797636 master-2 kubenswrapper[4776]: I1011 10:54:38.797479 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:38.803618 master-2 kubenswrapper[4776]: I1011 10:54:38.803559 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:54:38.836690 master-2 kubenswrapper[4776]: I1011 10:54:38.836585 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-848fcbb4df-54n8l" podStartSLOduration=2.836563102 podStartE2EDuration="2.836563102s" podCreationTimestamp="2025-10-11 10:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:54:38.636042342 +0000 UTC m=+1713.420469061" watchObservedRunningTime="2025-10-11 10:54:38.836563102 +0000 UTC m=+1713.620989811" Oct 11 10:54:39.922991 master-2 kubenswrapper[4776]: I1011 10:54:39.919069 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-external-api-2" Oct 11 
10:54:39.922991 master-2 kubenswrapper[4776]: I1011 10:54:39.919131 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:39.949325 master-2 kubenswrapper[4776]: I1011 10:54:39.949256 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:39.966406 master-2 kubenswrapper[4776]: I1011 10:54:39.966346 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:40.599959 master-2 kubenswrapper[4776]: I1011 10:54:40.599837 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:40.600111 master-2 kubenswrapper[4776]: I1011 10:54:40.600057 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:42.623636 master-2 kubenswrapper[4776]: I1011 10:54:42.623504 4776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 11 10:54:42.623636 master-2 kubenswrapper[4776]: I1011 10:54:42.623572 4776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 11 10:54:42.636828 master-2 kubenswrapper[4776]: I1011 10:54:42.636786 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:42.638074 master-2 kubenswrapper[4776]: I1011 10:54:42.638026 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:54:43.443445 master-2 kubenswrapper[4776]: I1011 10:54:43.443370 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:54:52.713759 master-2 kubenswrapper[4776]: I1011 10:54:52.713628 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-nz82h" event={"ID":"005f2579-b848-40fd-b3f3-2d3383344047","Type":"ContainerStarted","Data":"5943503815e84fefb31e290c68a69b97ca3d79be2036bfcb024f274a60831171"} Oct 11 10:54:52.734037 master-2 kubenswrapper[4776]: I1011 10:54:52.733945 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-nz82h" podStartSLOduration=2.308301062 podStartE2EDuration="19.733927674s" podCreationTimestamp="2025-10-11 10:54:33 +0000 UTC" firstStartedPulling="2025-10-11 10:54:34.708136657 +0000 UTC m=+1709.492563366" lastFinishedPulling="2025-10-11 10:54:52.133763269 +0000 UTC m=+1726.918189978" observedRunningTime="2025-10-11 10:54:52.731565211 +0000 UTC m=+1727.515991930" watchObservedRunningTime="2025-10-11 10:54:52.733927674 +0000 UTC m=+1727.518354383" Oct 11 10:54:53.731865 master-2 kubenswrapper[4776]: I1011 10:54:53.731788 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-db-sync-4sh7r" event={"ID":"c4f2a1bf-160f-40ad-bc2c-a7286a90b988","Type":"ContainerStarted","Data":"076657b6f64fe541979b78922896c11acb7556c05c2c695c6e5189bd99136f77"} Oct 11 10:54:53.760513 master-2 kubenswrapper[4776]: I1011 10:54:53.760339 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b5802-db-sync-4sh7r" podStartSLOduration=2.511494488 podStartE2EDuration="19.760325294s" podCreationTimestamp="2025-10-11 10:54:34 +0000 UTC" firstStartedPulling="2025-10-11 10:54:35.040108917 +0000 UTC 
m=+1709.824535626" lastFinishedPulling="2025-10-11 10:54:52.288939723 +0000 UTC m=+1727.073366432" observedRunningTime="2025-10-11 10:54:53.759988335 +0000 UTC m=+1728.544415044" watchObservedRunningTime="2025-10-11 10:54:53.760325294 +0000 UTC m=+1728.544752003" Oct 11 10:54:58.769762 master-2 kubenswrapper[4776]: I1011 10:54:58.769665 4776 generic.go:334] "Generic (PLEG): container finished" podID="4a1c4d38-1f25-4465-9976-43be28a3b282" containerID="3eac08206e51e42747d734fbad286ecc138ff94d119ee5ffe85a0b9dac4348e7" exitCode=0 Oct 11 10:54:58.770483 master-2 kubenswrapper[4776]: I1011 10:54:58.769786 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-n7nm2" event={"ID":"4a1c4d38-1f25-4465-9976-43be28a3b282","Type":"ContainerDied","Data":"3eac08206e51e42747d734fbad286ecc138ff94d119ee5ffe85a0b9dac4348e7"} Oct 11 10:54:59.783774 master-2 kubenswrapper[4776]: I1011 10:54:59.783690 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-n7nm2" event={"ID":"4a1c4d38-1f25-4465-9976-43be28a3b282","Type":"ContainerStarted","Data":"a51a44def94d3717c345488e50d0f116c0777018eb329e198859a513f723b71f"} Oct 11 10:54:59.786743 master-2 kubenswrapper[4776]: I1011 10:54:59.786705 4776 generic.go:334] "Generic (PLEG): container finished" podID="005f2579-b848-40fd-b3f3-2d3383344047" containerID="5943503815e84fefb31e290c68a69b97ca3d79be2036bfcb024f274a60831171" exitCode=0 Oct 11 10:54:59.787148 master-2 kubenswrapper[4776]: I1011 10:54:59.786861 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-nz82h" event={"ID":"005f2579-b848-40fd-b3f3-2d3383344047","Type":"ContainerDied","Data":"5943503815e84fefb31e290c68a69b97ca3d79be2036bfcb024f274a60831171"} Oct 11 10:54:59.815572 master-2 kubenswrapper[4776]: I1011 10:54:59.815496 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-n7nm2" podStartSLOduration=3.407428689 podStartE2EDuration="22.815479894s" podCreationTimestamp="2025-10-11 10:54:37 +0000 UTC" firstStartedPulling="2025-10-11 10:54:38.506925535 +0000 UTC m=+1713.291352234" lastFinishedPulling="2025-10-11 10:54:57.91497673 +0000 UTC m=+1732.699403439" observedRunningTime="2025-10-11 10:54:59.813216892 +0000 UTC m=+1734.597643601" watchObservedRunningTime="2025-10-11 10:54:59.815479894 +0000 UTC m=+1734.599906603" Oct 11 10:55:01.643631 master-2 kubenswrapper[4776]: I1011 10:55:01.643572 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-nz82h" Oct 11 10:55:01.748555 master-2 kubenswrapper[4776]: I1011 10:55:01.748491 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf8n6\" (UniqueName: \"kubernetes.io/projected/005f2579-b848-40fd-b3f3-2d3383344047-kube-api-access-hf8n6\") pod \"005f2579-b848-40fd-b3f3-2d3383344047\" (UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " Oct 11 10:55:01.748764 master-2 kubenswrapper[4776]: I1011 10:55:01.748640 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-combined-ca-bundle\") pod \"005f2579-b848-40fd-b3f3-2d3383344047\" (UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " Oct 11 10:55:01.748764 master-2 kubenswrapper[4776]: I1011 10:55:01.748708 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-config-data\") pod \"005f2579-b848-40fd-b3f3-2d3383344047\" (UID: \"005f2579-b848-40fd-b3f3-2d3383344047\") " Oct 11 10:55:01.767321 master-2 kubenswrapper[4776]: I1011 10:55:01.764489 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/005f2579-b848-40fd-b3f3-2d3383344047-kube-api-access-hf8n6" (OuterVolumeSpecName: "kube-api-access-hf8n6") pod "005f2579-b848-40fd-b3f3-2d3383344047" (UID: "005f2579-b848-40fd-b3f3-2d3383344047"). InnerVolumeSpecName "kube-api-access-hf8n6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:01.785401 master-2 kubenswrapper[4776]: I1011 10:55:01.785321 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "005f2579-b848-40fd-b3f3-2d3383344047" (UID: "005f2579-b848-40fd-b3f3-2d3383344047"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:01.804792 master-2 kubenswrapper[4776]: I1011 10:55:01.804617 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-config-data" (OuterVolumeSpecName: "config-data") pod "005f2579-b848-40fd-b3f3-2d3383344047" (UID: "005f2579-b848-40fd-b3f3-2d3383344047"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:01.807146 master-2 kubenswrapper[4776]: I1011 10:55:01.807085 4776 generic.go:334] "Generic (PLEG): container finished" podID="c4f2a1bf-160f-40ad-bc2c-a7286a90b988" containerID="076657b6f64fe541979b78922896c11acb7556c05c2c695c6e5189bd99136f77" exitCode=0 Oct 11 10:55:01.807247 master-2 kubenswrapper[4776]: I1011 10:55:01.807145 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-db-sync-4sh7r" event={"ID":"c4f2a1bf-160f-40ad-bc2c-a7286a90b988","Type":"ContainerDied","Data":"076657b6f64fe541979b78922896c11acb7556c05c2c695c6e5189bd99136f77"} Oct 11 10:55:01.808628 master-2 kubenswrapper[4776]: I1011 10:55:01.808590 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-nz82h" Oct 11 10:55:01.808628 master-2 kubenswrapper[4776]: I1011 10:55:01.808614 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-nz82h" event={"ID":"005f2579-b848-40fd-b3f3-2d3383344047","Type":"ContainerDied","Data":"cc5b9c7250419dc166026542bf46853e7a0777efbedca2dc833bde7e55dd6960"} Oct 11 10:55:01.808762 master-2 kubenswrapper[4776]: I1011 10:55:01.808644 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc5b9c7250419dc166026542bf46853e7a0777efbedca2dc833bde7e55dd6960" Oct 11 10:55:01.810361 master-2 kubenswrapper[4776]: I1011 10:55:01.810298 4776 generic.go:334] "Generic (PLEG): container finished" podID="d90a5c6e-6cd2-4396-b38c-dc0e03da9d38" containerID="b7e0d5acf6bbdc53a5ba11187ce29782ecfb6106125d3631307f9dca40bcd06a" exitCode=0 Oct 11 10:55:01.810361 master-2 kubenswrapper[4776]: I1011 10:55:01.810359 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2dgxj" event={"ID":"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38","Type":"ContainerDied","Data":"b7e0d5acf6bbdc53a5ba11187ce29782ecfb6106125d3631307f9dca40bcd06a"} Oct 11 10:55:01.851016 master-2 kubenswrapper[4776]: I1011 10:55:01.850947 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:01.851016 master-2 kubenswrapper[4776]: I1011 10:55:01.851013 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/005f2579-b848-40fd-b3f3-2d3383344047-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:01.851171 master-2 kubenswrapper[4776]: I1011 10:55:01.851024 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf8n6\" (UniqueName: \"kubernetes.io/projected/005f2579-b848-40fd-b3f3-2d3383344047-kube-api-access-hf8n6\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:03.797298 master-2 kubenswrapper[4776]: I1011 10:55:03.797269 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2dgxj" Oct 11 10:55:03.803032 master-2 kubenswrapper[4776]: I1011 10:55:03.802994 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:55:03.854040 master-2 kubenswrapper[4776]: I1011 10:55:03.853950 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2dgxj" event={"ID":"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38","Type":"ContainerDied","Data":"f641f5141915e0e5383d1c25132b34d7f73e6431d60ee61f32ba0f43bc3081a9"} Oct 11 10:55:03.854241 master-2 kubenswrapper[4776]: I1011 10:55:03.854202 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f641f5141915e0e5383d1c25132b34d7f73e6431d60ee61f32ba0f43bc3081a9" Oct 11 10:55:03.854591 master-2 kubenswrapper[4776]: I1011 10:55:03.854517 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-2dgxj" Oct 11 10:55:03.856907 master-2 kubenswrapper[4776]: I1011 10:55:03.856732 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-db-sync-4sh7r" event={"ID":"c4f2a1bf-160f-40ad-bc2c-a7286a90b988","Type":"ContainerDied","Data":"fa7c55aca3df89af2240ffd38909152d52165a2a24214ef5226c24aacb7b9aca"} Oct 11 10:55:03.856907 master-2 kubenswrapper[4776]: I1011 10:55:03.856800 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa7c55aca3df89af2240ffd38909152d52165a2a24214ef5226c24aacb7b9aca" Oct 11 10:55:03.856907 master-2 kubenswrapper[4776]: I1011 10:55:03.856808 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-db-sync-4sh7r" Oct 11 10:55:03.924014 master-2 kubenswrapper[4776]: I1011 10:55:03.923943 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-etc-machine-id\") pod \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " Oct 11 10:55:03.924225 master-2 kubenswrapper[4776]: I1011 10:55:03.924037 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-db-sync-config-data\") pod \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " Oct 11 10:55:03.924225 master-2 kubenswrapper[4776]: I1011 10:55:03.924091 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-scripts\") pod \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " Oct 11 10:55:03.924225 master-2 kubenswrapper[4776]: I1011 10:55:03.924193 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkn85\" (UniqueName: \"kubernetes.io/projected/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-kube-api-access-dkn85\") pod \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " Oct 11 10:55:03.924321 master-2 kubenswrapper[4776]: I1011 10:55:03.924251 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-config\") pod \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " Oct 11 10:55:03.924321 master-2 kubenswrapper[4776]: I1011 10:55:03.924292 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmbf9\" (UniqueName: \"kubernetes.io/projected/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-kube-api-access-rmbf9\") pod \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " Oct 11 10:55:03.924321 master-2 kubenswrapper[4776]: I1011 10:55:03.924309 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-combined-ca-bundle\") pod \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\" (UID: \"d90a5c6e-6cd2-4396-b38c-dc0e03da9d38\") " Oct 11 10:55:03.924408 master-2 kubenswrapper[4776]: I1011 10:55:03.924331 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-combined-ca-bundle\") pod \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " Oct 11 10:55:03.924408 master-2 kubenswrapper[4776]: I1011 10:55:03.924351 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-config-data\") pod \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\" (UID: \"c4f2a1bf-160f-40ad-bc2c-a7286a90b988\") " Oct 11 10:55:03.925896 master-2 kubenswrapper[4776]: I1011 10:55:03.925860 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c4f2a1bf-160f-40ad-bc2c-a7286a90b988" (UID: "c4f2a1bf-160f-40ad-bc2c-a7286a90b988"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:03.930509 master-2 kubenswrapper[4776]: I1011 10:55:03.928834 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-kube-api-access-rmbf9" (OuterVolumeSpecName: "kube-api-access-rmbf9") pod "d90a5c6e-6cd2-4396-b38c-dc0e03da9d38" (UID: "d90a5c6e-6cd2-4396-b38c-dc0e03da9d38"). InnerVolumeSpecName "kube-api-access-rmbf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:03.930509 master-2 kubenswrapper[4776]: I1011 10:55:03.928968 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c4f2a1bf-160f-40ad-bc2c-a7286a90b988" (UID: "c4f2a1bf-160f-40ad-bc2c-a7286a90b988"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:03.930509 master-2 kubenswrapper[4776]: I1011 10:55:03.929306 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-kube-api-access-dkn85" (OuterVolumeSpecName: "kube-api-access-dkn85") pod "c4f2a1bf-160f-40ad-bc2c-a7286a90b988" (UID: "c4f2a1bf-160f-40ad-bc2c-a7286a90b988"). InnerVolumeSpecName "kube-api-access-dkn85". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:03.930509 master-2 kubenswrapper[4776]: I1011 10:55:03.929343 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-scripts" (OuterVolumeSpecName: "scripts") pod "c4f2a1bf-160f-40ad-bc2c-a7286a90b988" (UID: "c4f2a1bf-160f-40ad-bc2c-a7286a90b988"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:03.950032 master-2 kubenswrapper[4776]: I1011 10:55:03.949908 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4f2a1bf-160f-40ad-bc2c-a7286a90b988" (UID: "c4f2a1bf-160f-40ad-bc2c-a7286a90b988"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:03.953776 master-2 kubenswrapper[4776]: I1011 10:55:03.953709 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d90a5c6e-6cd2-4396-b38c-dc0e03da9d38" (UID: "d90a5c6e-6cd2-4396-b38c-dc0e03da9d38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:03.972608 master-2 kubenswrapper[4776]: I1011 10:55:03.972542 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-config-data" (OuterVolumeSpecName: "config-data") pod "c4f2a1bf-160f-40ad-bc2c-a7286a90b988" (UID: "c4f2a1bf-160f-40ad-bc2c-a7286a90b988"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:03.976744 master-2 kubenswrapper[4776]: I1011 10:55:03.976686 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-config" (OuterVolumeSpecName: "config") pod "d90a5c6e-6cd2-4396-b38c-dc0e03da9d38" (UID: "d90a5c6e-6cd2-4396-b38c-dc0e03da9d38"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:04.026180 master-2 kubenswrapper[4776]: I1011 10:55:04.025964 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkn85\" (UniqueName: \"kubernetes.io/projected/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-kube-api-access-dkn85\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:04.026180 master-2 kubenswrapper[4776]: I1011 10:55:04.026001 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:04.026180 master-2 kubenswrapper[4776]: I1011 10:55:04.026011 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmbf9\" (UniqueName: \"kubernetes.io/projected/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-kube-api-access-rmbf9\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:04.026180 master-2 kubenswrapper[4776]: I1011 10:55:04.026021 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:04.026180 master-2 kubenswrapper[4776]: I1011 10:55:04.026031 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:04.026180 master-2 kubenswrapper[4776]: I1011 10:55:04.026040 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:04.026180 master-2 kubenswrapper[4776]: I1011 10:55:04.026048 4776 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-etc-machine-id\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:04.026180 master-2 kubenswrapper[4776]: I1011 10:55:04.026056 4776 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-db-sync-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:04.026180 master-2 kubenswrapper[4776]: I1011 10:55:04.026063 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f2a1bf-160f-40ad-bc2c-a7286a90b988-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:04.177469 master-2 kubenswrapper[4776]: I1011 10:55:04.177416 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b5802-scheduler-0"] Oct 11 10:55:04.177765 master-2 kubenswrapper[4776]: E1011 10:55:04.177740 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="005f2579-b848-40fd-b3f3-2d3383344047" containerName="heat-db-sync" Oct 11 10:55:04.177765 master-2 kubenswrapper[4776]: I1011 10:55:04.177760 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="005f2579-b848-40fd-b3f3-2d3383344047" containerName="heat-db-sync" Oct 11 10:55:04.177849 master-2 kubenswrapper[4776]: E1011 10:55:04.177780 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d90a5c6e-6cd2-4396-b38c-dc0e03da9d38" containerName="neutron-db-sync" Oct 11 10:55:04.177849 master-2 kubenswrapper[4776]: I1011 10:55:04.177787 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="d90a5c6e-6cd2-4396-b38c-dc0e03da9d38" containerName="neutron-db-sync" Oct 11 10:55:04.177849 master-2 kubenswrapper[4776]: E1011 10:55:04.177813 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f2a1bf-160f-40ad-bc2c-a7286a90b988" containerName="cinder-b5802-db-sync" Oct 11 10:55:04.177849 master-2 kubenswrapper[4776]: I1011 10:55:04.177820 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f2a1bf-160f-40ad-bc2c-a7286a90b988" containerName="cinder-b5802-db-sync" Oct 11 10:55:04.178021 master-2 kubenswrapper[4776]: I1011 10:55:04.177992 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="d90a5c6e-6cd2-4396-b38c-dc0e03da9d38" containerName="neutron-db-sync" Oct 11 10:55:04.178021 master-2 kubenswrapper[4776]: I1011 10:55:04.178012 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f2a1bf-160f-40ad-bc2c-a7286a90b988" containerName="cinder-b5802-db-sync" Oct 11 10:55:04.178096 master-2 kubenswrapper[4776]: I1011 10:55:04.178025 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="005f2579-b848-40fd-b3f3-2d3383344047" containerName="heat-db-sync" Oct 11 10:55:04.178973 master-2 kubenswrapper[4776]: I1011 10:55:04.178851 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:04.185823 master-2 kubenswrapper[4776]: I1011 10:55:04.185650 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-scripts" Oct 11 10:55:04.187072 master-2 kubenswrapper[4776]: I1011 10:55:04.186855 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-scheduler-config-data" Oct 11 10:55:04.187393 master-2 kubenswrapper[4776]: I1011 10:55:04.187208 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-config-data" Oct 11 10:55:04.216645 master-2 kubenswrapper[4776]: I1011 10:55:04.216387 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-scheduler-0"] Oct 11 10:55:04.272699 master-2 kubenswrapper[4776]: I1011 10:55:04.260282 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:04.272699 master-2 kubenswrapper[4776]: I1011 10:55:04.260361 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-scripts\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:04.272699 master-2 kubenswrapper[4776]: I1011 10:55:04.260441 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data-custom\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:04.272699 master-2 kubenswrapper[4776]: I1011 10:55:04.260516 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e1008bd-5444-4490-8e34-8a7843bf5c45-etc-machine-id\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:04.272699 master-2 kubenswrapper[4776]: I1011 10:55:04.260597 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlmpd\" (UniqueName: \"kubernetes.io/projected/6e1008bd-5444-4490-8e34-8a7843bf5c45-kube-api-access-jlmpd\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:04.272699 master-2 kubenswrapper[4776]: I1011 10:55:04.260618 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-combined-ca-bundle\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:04.375122 master-2 kubenswrapper[4776]: I1011 10:55:04.366288 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlmpd\" (UniqueName: \"kubernetes.io/projected/6e1008bd-5444-4490-8e34-8a7843bf5c45-kube-api-access-jlmpd\") pod 
\"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:04.375122 master-2 kubenswrapper[4776]: I1011 10:55:04.366340 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-combined-ca-bundle\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:04.375122 master-2 kubenswrapper[4776]: I1011 10:55:04.366407 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:04.375122 master-2 kubenswrapper[4776]: I1011 10:55:04.366434 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-scripts\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:04.375122 master-2 kubenswrapper[4776]: I1011 10:55:04.366481 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data-custom\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:04.375122 master-2 kubenswrapper[4776]: I1011 10:55:04.366602 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e1008bd-5444-4490-8e34-8a7843bf5c45-etc-machine-id\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:04.375122 master-2 kubenswrapper[4776]: I1011 10:55:04.366768 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e1008bd-5444-4490-8e34-8a7843bf5c45-etc-machine-id\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:04.375122 master-2 kubenswrapper[4776]: I1011 10:55:04.371286 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-combined-ca-bundle\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:04.377439 master-2 kubenswrapper[4776]: I1011 10:55:04.377407 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-scripts\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:04.392788 master-2 kubenswrapper[4776]: I1011 10:55:04.388620 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data-custom\") pod \"cinder-b5802-scheduler-0\" (UID: 
\"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:04.400307 master-2 kubenswrapper[4776]: I1011 10:55:04.399776 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:04.409275 master-2 kubenswrapper[4776]: I1011 10:55:04.406824 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlmpd\" (UniqueName: \"kubernetes.io/projected/6e1008bd-5444-4490-8e34-8a7843bf5c45-kube-api-access-jlmpd\") pod \"cinder-b5802-scheduler-0\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:04.438985 master-2 kubenswrapper[4776]: I1011 10:55:04.434583 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b5802-backup-0"] Oct 11 10:55:04.438985 master-2 kubenswrapper[4776]: I1011 10:55:04.436447 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.453810 master-2 kubenswrapper[4776]: I1011 10:55:04.451410 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-backup-config-data" Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479090 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-brick\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479141 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data-custom\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479159 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-dev\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479176 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-machine-id\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479194 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-lib-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479217 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479240 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-iscsi\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479278 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-combined-ca-bundle\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479318 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-sys\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479341 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n864d\" (UniqueName: \"kubernetes.io/projected/64da8a05-f383-4643-b08d-639963f8bdd5-kube-api-access-n864d\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479362 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479384 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-nvme\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479421 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-run\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479460 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-scripts\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.500245 master-2 kubenswrapper[4776]: I1011 10:55:04.479480 
4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-lib-modules\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.518270 master-2 kubenswrapper[4776]: I1011 10:55:04.518231 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:04.520448 master-2 kubenswrapper[4776]: I1011 10:55:04.520425 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-backup-0"] Oct 11 10:55:04.568698 master-2 kubenswrapper[4776]: I1011 10:55:04.568620 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7887b79bcd-4lcts"] Oct 11 10:55:04.573699 master-2 kubenswrapper[4776]: I1011 10:55:04.570072 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:55:04.579635 master-2 kubenswrapper[4776]: I1011 10:55:04.578291 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 11 10:55:04.579635 master-2 kubenswrapper[4776]: I1011 10:55:04.578647 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Oct 11 10:55:04.579635 master-2 kubenswrapper[4776]: I1011 10:55:04.578797 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 11 10:55:04.582286 master-2 kubenswrapper[4776]: I1011 10:55:04.582042 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-sys\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.582286 master-2 kubenswrapper[4776]: I1011 10:55:04.582086 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n864d\" (UniqueName: \"kubernetes.io/projected/64da8a05-f383-4643-b08d-639963f8bdd5-kube-api-access-n864d\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.582286 master-2 kubenswrapper[4776]: I1011 10:55:04.582110 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.582286 master-2 kubenswrapper[4776]: I1011 10:55:04.582139 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-nvme\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.582286 master-2 kubenswrapper[4776]: I1011 10:55:04.582177 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-run\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.582286 master-2 kubenswrapper[4776]: I1011 10:55:04.582218 4776 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-scripts\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.582286 master-2 kubenswrapper[4776]: I1011 10:55:04.582254 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-lib-modules\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.582286 master-2 kubenswrapper[4776]: I1011 10:55:04.582275 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-brick\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.582635 master-2 kubenswrapper[4776]: I1011 10:55:04.582291 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data-custom\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.582635 master-2 kubenswrapper[4776]: I1011 10:55:04.582355 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-dev\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.582635 master-2 kubenswrapper[4776]: I1011 10:55:04.582369 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-machine-id\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.583643 master-2 kubenswrapper[4776]: I1011 10:55:04.583475 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-lib-modules\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.583643 master-2 kubenswrapper[4776]: I1011 10:55:04.583595 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-brick\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.587324 master-2 kubenswrapper[4776]: I1011 10:55:04.585783 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-lib-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.587324 master-2 kubenswrapper[4776]: I1011 10:55:04.586714 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-scripts\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.588025 master-2 kubenswrapper[4776]: I1011 10:55:04.588001 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data-custom\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.596879 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-run\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.596919 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-sys\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.597170 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-nvme\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.597202 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-machine-id\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.597219 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-dev\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.597299 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-lib-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.602299 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.602423 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-iscsi\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 
10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.602525 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-combined-ca-bundle\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.602499 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.602606 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-iscsi\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.609097 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.615864 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7887b79bcd-4lcts"] Oct 11 10:55:04.631083 master-2 kubenswrapper[4776]: I1011 10:55:04.622203 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-combined-ca-bundle\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.652751 master-2 kubenswrapper[4776]: I1011 10:55:04.633615 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n864d\" (UniqueName: \"kubernetes.io/projected/64da8a05-f383-4643-b08d-639963f8bdd5-kube-api-access-n864d\") pod \"cinder-b5802-backup-0\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.689741 master-2 kubenswrapper[4776]: I1011 10:55:04.689694 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9f6b86c79-52ppr"] Oct 11 10:55:04.693086 master-2 kubenswrapper[4776]: I1011 10:55:04.692717 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:04.712557 master-2 kubenswrapper[4776]: I1011 10:55:04.711402 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-config\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:55:04.712557 master-2 kubenswrapper[4776]: I1011 10:55:04.712154 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-combined-ca-bundle\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:55:04.712557 master-2 kubenswrapper[4776]: I1011 10:55:04.712290 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-ovndb-tls-certs\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:55:04.712557 master-2 kubenswrapper[4776]: I1011 10:55:04.712327 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfzck\" (UniqueName: \"kubernetes.io/projected/60f1a3e8-20d2-48e9-842c-9312ce07efe0-kube-api-access-kfzck\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:55:04.712557 master-2 kubenswrapper[4776]: I1011 10:55:04.712444 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-httpd-config\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:55:04.731307 master-2 kubenswrapper[4776]: I1011 10:55:04.731239 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9f6b86c79-52ppr"] Oct 11 10:55:04.776829 master-2 kubenswrapper[4776]: I1011 10:55:04.776764 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b5802-api-2"] Oct 11 10:55:04.781137 master-2 kubenswrapper[4776]: I1011 10:55:04.780646 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-api-2" Oct 11 10:55:04.790010 master-2 kubenswrapper[4776]: I1011 10:55:04.789966 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-api-config-data" Oct 11 10:55:04.792717 master-2 kubenswrapper[4776]: I1011 10:55:04.792630 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-api-2"] Oct 11 10:55:04.813396 master-2 kubenswrapper[4776]: I1011 10:55:04.813351 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-httpd-config\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:55:04.813869 master-2 kubenswrapper[4776]: I1011 10:55:04.813460 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-config\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:55:04.813869 master-2 kubenswrapper[4776]: I1011 10:55:04.813524 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4jjl\" (UniqueName: \"kubernetes.io/projected/941fb918-e4b8-4ef7-9ad1-9af907c5593a-kube-api-access-k4jjl\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:04.813869 master-2 kubenswrapper[4776]: I1011 10:55:04.813555 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-sb\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:04.813869 master-2 kubenswrapper[4776]: I1011 10:55:04.813615 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-svc\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:04.813869 master-2 kubenswrapper[4776]: I1011 10:55:04.813791 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-config\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:04.814022 master-2 kubenswrapper[4776]: I1011 10:55:04.813901 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-swift-storage-0\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:04.814022 master-2 kubenswrapper[4776]: I1011 10:55:04.813988 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-combined-ca-bundle\") pod 
\"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:55:04.814704 master-2 kubenswrapper[4776]: I1011 10:55:04.814600 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-ovndb-tls-certs\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:55:04.814704 master-2 kubenswrapper[4776]: I1011 10:55:04.814627 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-nb\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:04.814704 master-2 kubenswrapper[4776]: I1011 10:55:04.814656 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfzck\" (UniqueName: \"kubernetes.io/projected/60f1a3e8-20d2-48e9-842c-9312ce07efe0-kube-api-access-kfzck\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:55:04.821197 master-2 kubenswrapper[4776]: I1011 10:55:04.821155 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-config\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:55:04.822540 master-2 kubenswrapper[4776]: I1011 10:55:04.822505 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-ovndb-tls-certs\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:55:04.825025 master-2 kubenswrapper[4776]: I1011 10:55:04.824990 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-combined-ca-bundle\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:55:04.842508 master-2 kubenswrapper[4776]: I1011 10:55:04.842458 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-httpd-config\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:55:04.847827 master-2 kubenswrapper[4776]: I1011 10:55:04.847771 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfzck\" (UniqueName: \"kubernetes.io/projected/60f1a3e8-20d2-48e9-842c-9312ce07efe0-kube-api-access-kfzck\") pod \"neutron-7887b79bcd-4lcts\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:55:04.857603 master-2 kubenswrapper[4776]: I1011 10:55:04.857558 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:04.896912 master-2 kubenswrapper[4776]: I1011 10:55:04.896857 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:55:04.915924 master-2 kubenswrapper[4776]: I1011 10:55:04.915864 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-config\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:04.916119 master-2 kubenswrapper[4776]: I1011 10:55:04.915938 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-swift-storage-0\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:04.916119 master-2 kubenswrapper[4776]: I1011 10:55:04.915976 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:04.916119 master-2 kubenswrapper[4776]: I1011 10:55:04.916023 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-combined-ca-bundle\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:04.916119 master-2 kubenswrapper[4776]: I1011 10:55:04.916055 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-nb\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:04.916119 master-2 kubenswrapper[4776]: I1011 10:55:04.916085 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-scripts\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:04.916398 master-2 kubenswrapper[4776]: I1011 10:55:04.916123 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data-custom\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:04.916398 master-2 kubenswrapper[4776]: I1011 10:55:04.916167 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e99b787-4e9b-4285-b175-63008b7e39de-logs\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:04.916398 master-2 kubenswrapper[4776]: I1011 10:55:04.916212 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e99b787-4e9b-4285-b175-63008b7e39de-etc-machine-id\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:04.916633 master-2 kubenswrapper[4776]: I1011 10:55:04.916577 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4jjl\" (UniqueName: \"kubernetes.io/projected/941fb918-e4b8-4ef7-9ad1-9af907c5593a-kube-api-access-k4jjl\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:04.916711 master-2 kubenswrapper[4776]: I1011 10:55:04.916646 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqgnt\" (UniqueName: \"kubernetes.io/projected/7e99b787-4e9b-4285-b175-63008b7e39de-kube-api-access-cqgnt\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:04.916769 master-2 kubenswrapper[4776]: I1011 10:55:04.916752 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-sb\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:04.916832 master-2 kubenswrapper[4776]: I1011 10:55:04.916809 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-svc\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:04.917503 master-2 kubenswrapper[4776]: I1011 10:55:04.917470 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-nb\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:04.918104 master-2 kubenswrapper[4776]: I1011 10:55:04.918066 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-sb\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:04.918310 master-2 kubenswrapper[4776]: I1011 10:55:04.918279 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-config\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:04.918516 master-2 kubenswrapper[4776]: I1011 10:55:04.918480 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-swift-storage-0\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:04.920844 master-2 kubenswrapper[4776]: I1011 10:55:04.920809 4776 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-svc\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:04.940049 master-2 kubenswrapper[4776]: I1011 10:55:04.939250 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4jjl\" (UniqueName: \"kubernetes.io/projected/941fb918-e4b8-4ef7-9ad1-9af907c5593a-kube-api-access-k4jjl\") pod \"dnsmasq-dns-9f6b86c79-52ppr\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:05.019075 master-2 kubenswrapper[4776]: I1011 10:55:05.019012 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:05.019208 master-2 kubenswrapper[4776]: I1011 10:55:05.019096 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-combined-ca-bundle\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:05.019208 master-2 kubenswrapper[4776]: I1011 10:55:05.019144 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-scripts\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:05.019208 master-2 kubenswrapper[4776]: I1011 10:55:05.019187 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data-custom\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:05.019336 master-2 kubenswrapper[4776]: I1011 10:55:05.019232 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e99b787-4e9b-4285-b175-63008b7e39de-logs\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:05.019336 master-2 kubenswrapper[4776]: I1011 10:55:05.019266 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e99b787-4e9b-4285-b175-63008b7e39de-etc-machine-id\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:05.019336 master-2 kubenswrapper[4776]: I1011 10:55:05.019312 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqgnt\" (UniqueName: \"kubernetes.io/projected/7e99b787-4e9b-4285-b175-63008b7e39de-kube-api-access-cqgnt\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:05.022717 master-2 kubenswrapper[4776]: I1011 10:55:05.020835 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/7e99b787-4e9b-4285-b175-63008b7e39de-etc-machine-id\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:05.022717 master-2 kubenswrapper[4776]: I1011 10:55:05.021094 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e99b787-4e9b-4285-b175-63008b7e39de-logs\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:05.026886 master-2 kubenswrapper[4776]: I1011 10:55:05.026839 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-combined-ca-bundle\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:05.034182 master-2 kubenswrapper[4776]: I1011 10:55:05.027618 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:05.041927 master-2 kubenswrapper[4776]: I1011 10:55:05.041894 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-scripts\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:05.053640 master-2 kubenswrapper[4776]: I1011 10:55:05.053595 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data-custom\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:05.056831 master-2 kubenswrapper[4776]: W1011 10:55:05.056767 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e1008bd_5444_4490_8e34_8a7843bf5c45.slice/crio-b44ccb387136d6d7afdc04ea61ce79bb899e6fab306823599152bcb3ecd66c0a WatchSource:0}: Error finding container b44ccb387136d6d7afdc04ea61ce79bb899e6fab306823599152bcb3ecd66c0a: Status 404 returned error can't find the container with id b44ccb387136d6d7afdc04ea61ce79bb899e6fab306823599152bcb3ecd66c0a Oct 11 10:55:05.057063 master-2 kubenswrapper[4776]: I1011 10:55:05.057007 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-scheduler-0"] Oct 11 10:55:05.068809 master-2 kubenswrapper[4776]: I1011 10:55:05.068263 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqgnt\" (UniqueName: \"kubernetes.io/projected/7e99b787-4e9b-4285-b175-63008b7e39de-kube-api-access-cqgnt\") pod \"cinder-b5802-api-2\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:05.091346 master-2 kubenswrapper[4776]: I1011 10:55:05.091311 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:05.113913 master-2 kubenswrapper[4776]: I1011 10:55:05.113867 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-api-2" Oct 11 10:55:05.377977 master-2 kubenswrapper[4776]: I1011 10:55:05.377916 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:55:05.392196 master-2 kubenswrapper[4776]: I1011 10:55:05.392163 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b597cbbf8-8sdfz" Oct 11 10:55:05.789909 master-2 kubenswrapper[4776]: I1011 10:55:05.789863 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9f6b86c79-52ppr"] Oct 11 10:55:05.818237 master-2 kubenswrapper[4776]: W1011 10:55:05.817169 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod941fb918_e4b8_4ef7_9ad1_9af907c5593a.slice/crio-512e94719fefec26efbc4a52f6f8eafa4db87fb35a6caaa6e2a7c8012427af32 WatchSource:0}: Error finding container 512e94719fefec26efbc4a52f6f8eafa4db87fb35a6caaa6e2a7c8012427af32: Status 404 returned error can't find the container with id 512e94719fefec26efbc4a52f6f8eafa4db87fb35a6caaa6e2a7c8012427af32 Oct 11 10:55:05.885524 master-2 kubenswrapper[4776]: I1011 10:55:05.885425 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" event={"ID":"941fb918-e4b8-4ef7-9ad1-9af907c5593a","Type":"ContainerStarted","Data":"512e94719fefec26efbc4a52f6f8eafa4db87fb35a6caaa6e2a7c8012427af32"} Oct 11 10:55:05.910220 master-2 kubenswrapper[4776]: I1011 10:55:05.910095 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-scheduler-0" event={"ID":"6e1008bd-5444-4490-8e34-8a7843bf5c45","Type":"ContainerStarted","Data":"b44ccb387136d6d7afdc04ea61ce79bb899e6fab306823599152bcb3ecd66c0a"} Oct 11 10:55:05.935782 master-2 kubenswrapper[4776]: I1011 10:55:05.935241 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7887b79bcd-4lcts"] Oct 11 10:55:05.955556 master-2 kubenswrapper[4776]: W1011 10:55:05.955483 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60f1a3e8_20d2_48e9_842c_9312ce07efe0.slice/crio-c3c06a80f0b059f9f2526f170c4bc91b415604fc699cd4abb9acf3dd95970a4b WatchSource:0}: Error finding container c3c06a80f0b059f9f2526f170c4bc91b415604fc699cd4abb9acf3dd95970a4b: Status 404 returned error can't find the container with id c3c06a80f0b059f9f2526f170c4bc91b415604fc699cd4abb9acf3dd95970a4b Oct 11 10:55:05.960791 master-2 kubenswrapper[4776]: I1011 10:55:05.960762 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-api-2"] Oct 11 10:55:06.078201 master-2 kubenswrapper[4776]: W1011 10:55:06.078154 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e99b787_4e9b_4285_b175_63008b7e39de.slice/crio-3fa25e1670b8b5d447679f8aa5304db6a9e223ba597842d1098f20a3ef6c774c WatchSource:0}: Error finding container 3fa25e1670b8b5d447679f8aa5304db6a9e223ba597842d1098f20a3ef6c774c: Status 404 returned error can't find the container with id 3fa25e1670b8b5d447679f8aa5304db6a9e223ba597842d1098f20a3ef6c774c Oct 11 10:55:06.258032 master-2 kubenswrapper[4776]: W1011 10:55:06.257973 4776 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64da8a05_f383_4643_b08d_639963f8bdd5.slice/crio-f08b0710ef4008f1cdc3f5be94ecdbead3ce33922cbd677781666c53de5eb3c1 WatchSource:0}: Error finding container f08b0710ef4008f1cdc3f5be94ecdbead3ce33922cbd677781666c53de5eb3c1: Status 404 returned error can't find the container with id f08b0710ef4008f1cdc3f5be94ecdbead3ce33922cbd677781666c53de5eb3c1 Oct 11 10:55:06.284383 master-2 kubenswrapper[4776]: I1011 10:55:06.284284 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-backup-0"] Oct 11 10:55:06.921614 master-2 kubenswrapper[4776]: I1011 10:55:06.921548 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-2" event={"ID":"7e99b787-4e9b-4285-b175-63008b7e39de","Type":"ContainerStarted","Data":"0547089e6afc3eb60691c0ebbe3c41ee9104f9a77e690771e776160cdc0930fb"} Oct 11 10:55:06.921614 master-2 kubenswrapper[4776]: I1011 10:55:06.921604 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-2" event={"ID":"7e99b787-4e9b-4285-b175-63008b7e39de","Type":"ContainerStarted","Data":"3fa25e1670b8b5d447679f8aa5304db6a9e223ba597842d1098f20a3ef6c774c"} Oct 11 10:55:06.922923 master-2 kubenswrapper[4776]: I1011 10:55:06.922873 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-backup-0" event={"ID":"64da8a05-f383-4643-b08d-639963f8bdd5","Type":"ContainerStarted","Data":"f08b0710ef4008f1cdc3f5be94ecdbead3ce33922cbd677781666c53de5eb3c1"} Oct 11 10:55:06.925540 master-2 kubenswrapper[4776]: I1011 10:55:06.925469 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-4lcts" event={"ID":"60f1a3e8-20d2-48e9-842c-9312ce07efe0","Type":"ContainerStarted","Data":"2c24e9bb8cd753d61f312b08264059f0ef167df16dddaf2f79133f8b02212dea"} Oct 11 10:55:06.925540 master-2 kubenswrapper[4776]: I1011 10:55:06.925505 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-4lcts" event={"ID":"60f1a3e8-20d2-48e9-842c-9312ce07efe0","Type":"ContainerStarted","Data":"a8ba91b5b068d25618fa0c0a4315f75f1ab1929925bf68853e336e2a934b0ca6"} Oct 11 10:55:06.925540 master-2 kubenswrapper[4776]: I1011 10:55:06.925520 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-4lcts" event={"ID":"60f1a3e8-20d2-48e9-842c-9312ce07efe0","Type":"ContainerStarted","Data":"c3c06a80f0b059f9f2526f170c4bc91b415604fc699cd4abb9acf3dd95970a4b"} Oct 11 10:55:06.925826 master-2 kubenswrapper[4776]: I1011 10:55:06.925798 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:55:06.929058 master-2 kubenswrapper[4776]: I1011 10:55:06.929017 4776 generic.go:334] "Generic (PLEG): container finished" podID="941fb918-e4b8-4ef7-9ad1-9af907c5593a" containerID="164c282461ad0735fec232ffd6ba306dca0fc05dcf0a0b68a9bb53e9c6e9c07c" exitCode=0 Oct 11 10:55:06.929416 master-2 kubenswrapper[4776]: I1011 10:55:06.929082 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" event={"ID":"941fb918-e4b8-4ef7-9ad1-9af907c5593a","Type":"ContainerDied","Data":"164c282461ad0735fec232ffd6ba306dca0fc05dcf0a0b68a9bb53e9c6e9c07c"} Oct 11 10:55:06.933251 master-2 kubenswrapper[4776]: I1011 10:55:06.933210 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-scheduler-0" 
event={"ID":"6e1008bd-5444-4490-8e34-8a7843bf5c45","Type":"ContainerStarted","Data":"5887930f9bcf6c6925fafe6faf9cfa4414c977579d10c2864ef8ca027b356b34"} Oct 11 10:55:06.967606 master-2 kubenswrapper[4776]: I1011 10:55:06.966401 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7887b79bcd-4lcts" podStartSLOduration=2.966381112 podStartE2EDuration="2.966381112s" podCreationTimestamp="2025-10-11 10:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:06.957550473 +0000 UTC m=+1741.741977182" watchObservedRunningTime="2025-10-11 10:55:06.966381112 +0000 UTC m=+1741.750807821" Oct 11 10:55:07.121927 master-2 kubenswrapper[4776]: I1011 10:55:07.118709 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b5802-api-2"] Oct 11 10:55:07.945192 master-2 kubenswrapper[4776]: I1011 10:55:07.945146 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-2" event={"ID":"7e99b787-4e9b-4285-b175-63008b7e39de","Type":"ContainerStarted","Data":"d2f58a8e0319242a1b64a2af94772f7b9698c1eaa7f642a654e5e11cc2fe7f89"} Oct 11 10:55:07.945974 master-2 kubenswrapper[4776]: I1011 10:55:07.945298 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b5802-api-2" podUID="7e99b787-4e9b-4285-b175-63008b7e39de" containerName="cinder-b5802-api-log" containerID="cri-o://0547089e6afc3eb60691c0ebbe3c41ee9104f9a77e690771e776160cdc0930fb" gracePeriod=30 Oct 11 10:55:07.945974 master-2 kubenswrapper[4776]: I1011 10:55:07.945539 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-b5802-api-2" Oct 11 10:55:07.945974 master-2 kubenswrapper[4776]: I1011 10:55:07.945828 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b5802-api-2" podUID="7e99b787-4e9b-4285-b175-63008b7e39de" containerName="cinder-api" containerID="cri-o://d2f58a8e0319242a1b64a2af94772f7b9698c1eaa7f642a654e5e11cc2fe7f89" gracePeriod=30 Oct 11 10:55:07.949978 master-2 kubenswrapper[4776]: I1011 10:55:07.949942 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-backup-0" event={"ID":"64da8a05-f383-4643-b08d-639963f8bdd5","Type":"ContainerStarted","Data":"5510c48b8a1b349e6fccbe8283441e4880694e565021961bec519004a98d24f9"} Oct 11 10:55:07.950433 master-2 kubenswrapper[4776]: I1011 10:55:07.950413 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-backup-0" event={"ID":"64da8a05-f383-4643-b08d-639963f8bdd5","Type":"ContainerStarted","Data":"6cbb734e786ac7a270d6c6e63f2ba748d07f4398724c5368110f5b4040ec1204"} Oct 11 10:55:07.952172 master-2 kubenswrapper[4776]: I1011 10:55:07.952142 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" event={"ID":"941fb918-e4b8-4ef7-9ad1-9af907c5593a","Type":"ContainerStarted","Data":"8b6ad17ffd3f1f6962cd2f8e96a39bece7f7bf59db6192abf67589a9326a7807"} Oct 11 10:55:07.952289 master-2 kubenswrapper[4776]: I1011 10:55:07.952271 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:07.954847 master-2 kubenswrapper[4776]: I1011 10:55:07.954807 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-scheduler-0" 
event={"ID":"6e1008bd-5444-4490-8e34-8a7843bf5c45","Type":"ContainerStarted","Data":"664efc4ad947feadd97f79da92d3d8787041e164ebbb71c8dd6b636b6c939f3f"} Oct 11 10:55:07.989217 master-2 kubenswrapper[4776]: I1011 10:55:07.989110 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b5802-api-2" podStartSLOduration=3.989094302 podStartE2EDuration="3.989094302s" podCreationTimestamp="2025-10-11 10:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:07.986788439 +0000 UTC m=+1742.771215148" watchObservedRunningTime="2025-10-11 10:55:07.989094302 +0000 UTC m=+1742.773521011" Oct 11 10:55:08.033366 master-2 kubenswrapper[4776]: I1011 10:55:08.033270 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b5802-backup-0" podStartSLOduration=3.032780511 podStartE2EDuration="4.033243468s" podCreationTimestamp="2025-10-11 10:55:04 +0000 UTC" firstStartedPulling="2025-10-11 10:55:06.263764942 +0000 UTC m=+1741.048191651" lastFinishedPulling="2025-10-11 10:55:07.264227899 +0000 UTC m=+1742.048654608" observedRunningTime="2025-10-11 10:55:08.03036358 +0000 UTC m=+1742.814790289" watchObservedRunningTime="2025-10-11 10:55:08.033243468 +0000 UTC m=+1742.817670177" Oct 11 10:55:08.109415 master-2 kubenswrapper[4776]: I1011 10:55:08.109116 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" podStartSLOduration=4.109091662 podStartE2EDuration="4.109091662s" podCreationTimestamp="2025-10-11 10:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:08.068158973 +0000 UTC m=+1742.852585692" watchObservedRunningTime="2025-10-11 10:55:08.109091662 +0000 UTC m=+1742.893518371" Oct 11 10:55:08.989460 master-2 kubenswrapper[4776]: I1011 10:55:08.987389 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-848fcbb4df-54n8l" Oct 11 10:55:08.991712 master-2 kubenswrapper[4776]: I1011 10:55:08.991628 4776 generic.go:334] "Generic (PLEG): container finished" podID="7e99b787-4e9b-4285-b175-63008b7e39de" containerID="0547089e6afc3eb60691c0ebbe3c41ee9104f9a77e690771e776160cdc0930fb" exitCode=143 Oct 11 10:55:08.994449 master-2 kubenswrapper[4776]: I1011 10:55:08.992830 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-2" event={"ID":"7e99b787-4e9b-4285-b175-63008b7e39de","Type":"ContainerDied","Data":"0547089e6afc3eb60691c0ebbe3c41ee9104f9a77e690771e776160cdc0930fb"} Oct 11 10:55:09.013946 master-2 kubenswrapper[4776]: I1011 10:55:09.013825 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b5802-scheduler-0" podStartSLOduration=3.9476782999999998 podStartE2EDuration="5.013802196s" podCreationTimestamp="2025-10-11 10:55:04 +0000 UTC" firstStartedPulling="2025-10-11 10:55:05.065410775 +0000 UTC m=+1739.849837484" lastFinishedPulling="2025-10-11 10:55:06.131534671 +0000 UTC m=+1740.915961380" observedRunningTime="2025-10-11 10:55:08.107442797 +0000 UTC m=+1742.891869506" watchObservedRunningTime="2025-10-11 10:55:09.013802196 +0000 UTC m=+1743.798228905" Oct 11 10:55:09.522931 master-2 kubenswrapper[4776]: I1011 10:55:09.520445 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-b5802-scheduler-0" Oct 11 
10:55:09.859480 master-2 kubenswrapper[4776]: I1011 10:55:09.859351 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:11.010215 master-2 kubenswrapper[4776]: I1011 10:55:11.010149 4776 generic.go:334] "Generic (PLEG): container finished" podID="4a1c4d38-1f25-4465-9976-43be28a3b282" containerID="a51a44def94d3717c345488e50d0f116c0777018eb329e198859a513f723b71f" exitCode=0 Oct 11 10:55:11.010713 master-2 kubenswrapper[4776]: I1011 10:55:11.010230 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-n7nm2" event={"ID":"4a1c4d38-1f25-4465-9976-43be28a3b282","Type":"ContainerDied","Data":"a51a44def94d3717c345488e50d0f116c0777018eb329e198859a513f723b71f"} Oct 11 10:55:11.802245 master-2 kubenswrapper[4776]: I1011 10:55:11.802109 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 11 10:55:11.804057 master-2 kubenswrapper[4776]: I1011 10:55:11.803868 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 11 10:55:11.807479 master-2 kubenswrapper[4776]: I1011 10:55:11.807428 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Oct 11 10:55:11.807787 master-2 kubenswrapper[4776]: I1011 10:55:11.807754 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Oct 11 10:55:11.899640 master-2 kubenswrapper[4776]: I1011 10:55:11.879986 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 11 10:55:11.899640 master-2 kubenswrapper[4776]: I1011 10:55:11.891903 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c1271fdd-4436-4935-b271-89ffa5394bc3-openstack-config\") pod \"openstackclient\" (UID: \"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:11.899640 master-2 kubenswrapper[4776]: I1011 10:55:11.891956 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs2vs\" (UniqueName: \"kubernetes.io/projected/c1271fdd-4436-4935-b271-89ffa5394bc3-kube-api-access-gs2vs\") pod \"openstackclient\" (UID: \"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:11.899640 master-2 kubenswrapper[4776]: I1011 10:55:11.892052 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1271fdd-4436-4935-b271-89ffa5394bc3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:11.899640 master-2 kubenswrapper[4776]: I1011 10:55:11.892120 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c1271fdd-4436-4935-b271-89ffa5394bc3-openstack-config-secret\") pod \"openstackclient\" (UID: \"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:11.993695 master-2 kubenswrapper[4776]: I1011 10:55:11.993628 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs2vs\" (UniqueName: \"kubernetes.io/projected/c1271fdd-4436-4935-b271-89ffa5394bc3-kube-api-access-gs2vs\") pod \"openstackclient\" (UID: 
\"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:11.993977 master-2 kubenswrapper[4776]: I1011 10:55:11.993734 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1271fdd-4436-4935-b271-89ffa5394bc3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:11.993977 master-2 kubenswrapper[4776]: I1011 10:55:11.993823 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c1271fdd-4436-4935-b271-89ffa5394bc3-openstack-config-secret\") pod \"openstackclient\" (UID: \"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:11.993977 master-2 kubenswrapper[4776]: I1011 10:55:11.993869 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c1271fdd-4436-4935-b271-89ffa5394bc3-openstack-config\") pod \"openstackclient\" (UID: \"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:11.994717 master-2 kubenswrapper[4776]: I1011 10:55:11.994693 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c1271fdd-4436-4935-b271-89ffa5394bc3-openstack-config\") pod \"openstackclient\" (UID: \"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:11.997239 master-2 kubenswrapper[4776]: I1011 10:55:11.997166 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c1271fdd-4436-4935-b271-89ffa5394bc3-openstack-config-secret\") pod \"openstackclient\" (UID: \"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:11.998374 master-2 kubenswrapper[4776]: I1011 10:55:11.998342 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1271fdd-4436-4935-b271-89ffa5394bc3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:12.014276 master-2 kubenswrapper[4776]: I1011 10:55:12.014227 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs2vs\" (UniqueName: \"kubernetes.io/projected/c1271fdd-4436-4935-b271-89ffa5394bc3-kube-api-access-gs2vs\") pod \"openstackclient\" (UID: \"c1271fdd-4436-4935-b271-89ffa5394bc3\") " pod="openstack/openstackclient" Oct 11 10:55:12.138775 master-2 kubenswrapper[4776]: I1011 10:55:12.138270 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 11 10:55:12.780705 master-2 kubenswrapper[4776]: I1011 10:55:12.780600 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 11 10:55:12.838216 master-2 kubenswrapper[4776]: W1011 10:55:12.838156 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1271fdd_4436_4935_b271_89ffa5394bc3.slice/crio-3056a9832e3a1c96181b7292d686779874ea6456c058dee4caaac1a208cd2208 WatchSource:0}: Error finding container 3056a9832e3a1c96181b7292d686779874ea6456c058dee4caaac1a208cd2208: Status 404 returned error can't find the container with id 3056a9832e3a1c96181b7292d686779874ea6456c058dee4caaac1a208cd2208 Oct 11 10:55:12.846741 master-2 kubenswrapper[4776]: I1011 10:55:12.846702 4776 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 10:55:12.881414 master-2 kubenswrapper[4776]: I1011 10:55:12.881361 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:55:12.921439 master-2 kubenswrapper[4776]: I1011 10:55:12.921388 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/4a1c4d38-1f25-4465-9976-43be28a3b282-etc-podinfo\") pod \"4a1c4d38-1f25-4465-9976-43be28a3b282\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " Oct 11 10:55:12.921883 master-2 kubenswrapper[4776]: I1011 10:55:12.921852 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-combined-ca-bundle\") pod \"4a1c4d38-1f25-4465-9976-43be28a3b282\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " Oct 11 10:55:12.922033 master-2 kubenswrapper[4776]: I1011 10:55:12.922002 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data-merged\") pod \"4a1c4d38-1f25-4465-9976-43be28a3b282\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " Oct 11 10:55:12.922079 master-2 kubenswrapper[4776]: I1011 10:55:12.922041 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data\") pod \"4a1c4d38-1f25-4465-9976-43be28a3b282\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " Oct 11 10:55:12.922289 master-2 kubenswrapper[4776]: I1011 10:55:12.922260 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh8v5\" (UniqueName: \"kubernetes.io/projected/4a1c4d38-1f25-4465-9976-43be28a3b282-kube-api-access-fh8v5\") pod \"4a1c4d38-1f25-4465-9976-43be28a3b282\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " Oct 11 10:55:12.922332 master-2 kubenswrapper[4776]: I1011 10:55:12.922292 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-scripts\") pod \"4a1c4d38-1f25-4465-9976-43be28a3b282\" (UID: \"4a1c4d38-1f25-4465-9976-43be28a3b282\") " Oct 11 10:55:12.924103 master-2 kubenswrapper[4776]: I1011 10:55:12.923713 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "4a1c4d38-1f25-4465-9976-43be28a3b282" (UID: "4a1c4d38-1f25-4465-9976-43be28a3b282"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:55:12.925197 master-2 kubenswrapper[4776]: I1011 10:55:12.925173 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4a1c4d38-1f25-4465-9976-43be28a3b282-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "4a1c4d38-1f25-4465-9976-43be28a3b282" (UID: "4a1c4d38-1f25-4465-9976-43be28a3b282"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 11 10:55:12.927954 master-2 kubenswrapper[4776]: I1011 10:55:12.927895 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a1c4d38-1f25-4465-9976-43be28a3b282-kube-api-access-fh8v5" (OuterVolumeSpecName: "kube-api-access-fh8v5") pod "4a1c4d38-1f25-4465-9976-43be28a3b282" (UID: "4a1c4d38-1f25-4465-9976-43be28a3b282"). InnerVolumeSpecName "kube-api-access-fh8v5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:12.929144 master-2 kubenswrapper[4776]: I1011 10:55:12.929080 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-scripts" (OuterVolumeSpecName: "scripts") pod "4a1c4d38-1f25-4465-9976-43be28a3b282" (UID: "4a1c4d38-1f25-4465-9976-43be28a3b282"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:12.948347 master-2 kubenswrapper[4776]: I1011 10:55:12.944067 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data" (OuterVolumeSpecName: "config-data") pod "4a1c4d38-1f25-4465-9976-43be28a3b282" (UID: "4a1c4d38-1f25-4465-9976-43be28a3b282"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:12.975264 master-2 kubenswrapper[4776]: I1011 10:55:12.975195 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a1c4d38-1f25-4465-9976-43be28a3b282" (UID: "4a1c4d38-1f25-4465-9976-43be28a3b282"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:13.031986 master-2 kubenswrapper[4776]: I1011 10:55:13.031931 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh8v5\" (UniqueName: \"kubernetes.io/projected/4a1c4d38-1f25-4465-9976-43be28a3b282-kube-api-access-fh8v5\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:13.031986 master-2 kubenswrapper[4776]: I1011 10:55:13.031974 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:13.031986 master-2 kubenswrapper[4776]: I1011 10:55:13.031984 4776 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/4a1c4d38-1f25-4465-9976-43be28a3b282-etc-podinfo\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:13.031986 master-2 kubenswrapper[4776]: I1011 10:55:13.031992 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:13.032537 master-2 kubenswrapper[4776]: I1011 10:55:13.032004 4776 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data-merged\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:13.032537 master-2 kubenswrapper[4776]: I1011 10:55:13.032012 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1c4d38-1f25-4465-9976-43be28a3b282-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:13.043773 master-2 kubenswrapper[4776]: I1011 10:55:13.042006 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-n7nm2" event={"ID":"4a1c4d38-1f25-4465-9976-43be28a3b282","Type":"ContainerDied","Data":"08c94658c701837f72a72b62e6464a19fa3e877f799281df15d5cc453d495789"} Oct 11 10:55:13.043773 master-2 kubenswrapper[4776]: I1011 10:55:13.042052 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08c94658c701837f72a72b62e6464a19fa3e877f799281df15d5cc453d495789" Oct 11 10:55:13.043773 master-2 kubenswrapper[4776]: I1011 10:55:13.042131 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-n7nm2" Oct 11 10:55:13.047714 master-2 kubenswrapper[4776]: I1011 10:55:13.047640 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c1271fdd-4436-4935-b271-89ffa5394bc3","Type":"ContainerStarted","Data":"3056a9832e3a1c96181b7292d686779874ea6456c058dee4caaac1a208cd2208"} Oct 11 10:55:13.985353 master-2 kubenswrapper[4776]: I1011 10:55:13.985290 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:55:13.985639 master-2 kubenswrapper[4776]: I1011 10:55:13.985558 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-internal-api-0" podUID="d0cc3965-fd34-4442-b408-a5ae441443e4" containerName="glance-log" containerID="cri-o://c0a0f68535f1045164a03ee0e1499295237f65e65bc92ffb7cde06bc73007d4d" gracePeriod=30 Oct 11 10:55:13.986603 master-2 kubenswrapper[4776]: I1011 10:55:13.985998 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-internal-api-0" podUID="d0cc3965-fd34-4442-b408-a5ae441443e4" containerName="glance-httpd" containerID="cri-o://30c682488fe3a4cbd0fa8fdf8a635610245b1344157964f363074a15ace75969" gracePeriod=30 Oct 11 10:55:14.771656 master-2 kubenswrapper[4776]: I1011 10:55:14.771606 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:14.983850 master-2 kubenswrapper[4776]: I1011 10:55:14.979425 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b5802-scheduler-0"] Oct 11 10:55:15.085857 master-2 kubenswrapper[4776]: I1011 10:55:15.085759 4776 generic.go:334] "Generic (PLEG): container finished" podID="d0cc3965-fd34-4442-b408-a5ae441443e4" containerID="c0a0f68535f1045164a03ee0e1499295237f65e65bc92ffb7cde06bc73007d4d" exitCode=143 Oct 11 10:55:15.087267 master-2 kubenswrapper[4776]: I1011 10:55:15.087054 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b5802-scheduler-0" podUID="6e1008bd-5444-4490-8e34-8a7843bf5c45" containerName="probe" containerID="cri-o://664efc4ad947feadd97f79da92d3d8787041e164ebbb71c8dd6b636b6c939f3f" gracePeriod=30 Oct 11 10:55:15.087463 master-2 kubenswrapper[4776]: I1011 10:55:15.087426 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"d0cc3965-fd34-4442-b408-a5ae441443e4","Type":"ContainerDied","Data":"c0a0f68535f1045164a03ee0e1499295237f65e65bc92ffb7cde06bc73007d4d"} Oct 11 10:55:15.087522 master-2 kubenswrapper[4776]: I1011 10:55:15.087476 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b5802-scheduler-0" podUID="6e1008bd-5444-4490-8e34-8a7843bf5c45" containerName="cinder-scheduler" containerID="cri-o://5887930f9bcf6c6925fafe6faf9cfa4414c977579d10c2864ef8ca027b356b34" gracePeriod=30 Oct 11 10:55:15.094201 master-2 kubenswrapper[4776]: I1011 10:55:15.093980 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:15.096045 master-2 kubenswrapper[4776]: I1011 10:55:15.095841 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:15.167417 master-2 kubenswrapper[4776]: I1011 10:55:15.167333 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-9f6b86c79-52ppr"] Oct 11 10:55:15.178439 master-2 kubenswrapper[4776]: I1011 10:55:15.177895 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b5802-backup-0"] Oct 11 10:55:15.733654 master-2 kubenswrapper[4776]: I1011 10:55:15.733587 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-66dfdcbff8-j4jhs"] Oct 11 10:55:15.734080 master-2 kubenswrapper[4776]: E1011 10:55:15.733943 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a1c4d38-1f25-4465-9976-43be28a3b282" containerName="ironic-db-sync" Oct 11 10:55:15.734080 master-2 kubenswrapper[4776]: I1011 10:55:15.733956 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a1c4d38-1f25-4465-9976-43be28a3b282" containerName="ironic-db-sync" Oct 11 10:55:15.734080 master-2 kubenswrapper[4776]: E1011 10:55:15.733967 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a1c4d38-1f25-4465-9976-43be28a3b282" containerName="init" Oct 11 10:55:15.734080 master-2 kubenswrapper[4776]: I1011 10:55:15.733976 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a1c4d38-1f25-4465-9976-43be28a3b282" containerName="init" Oct 11 10:55:15.734732 master-2 kubenswrapper[4776]: I1011 10:55:15.734117 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a1c4d38-1f25-4465-9976-43be28a3b282" containerName="ironic-db-sync" Oct 11 10:55:15.735050 master-2 kubenswrapper[4776]: I1011 10:55:15.735030 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.747825 master-2 kubenswrapper[4776]: I1011 10:55:15.744153 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 11 10:55:15.747825 master-2 kubenswrapper[4776]: I1011 10:55:15.744375 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Oct 11 10:55:15.747825 master-2 kubenswrapper[4776]: I1011 10:55:15.744548 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Oct 11 10:55:15.747825 master-2 kubenswrapper[4776]: I1011 10:55:15.744747 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Oct 11 10:55:15.747825 master-2 kubenswrapper[4776]: I1011 10:55:15.744890 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Oct 11 10:55:15.753405 master-2 kubenswrapper[4776]: I1011 10:55:15.753343 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-66dfdcbff8-j4jhs"] Oct 11 10:55:15.804733 master-2 kubenswrapper[4776]: I1011 10:55:15.801242 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-log-httpd\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.804733 master-2 kubenswrapper[4776]: I1011 10:55:15.801328 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-run-httpd\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.804733 master-2 kubenswrapper[4776]: I1011 
10:55:15.801361 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-etc-swift\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.804733 master-2 kubenswrapper[4776]: I1011 10:55:15.801438 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-combined-ca-bundle\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.804733 master-2 kubenswrapper[4776]: I1011 10:55:15.801500 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-config-data\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.804733 master-2 kubenswrapper[4776]: I1011 10:55:15.801576 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw848\" (UniqueName: \"kubernetes.io/projected/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-kube-api-access-zw848\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.804733 master-2 kubenswrapper[4776]: I1011 10:55:15.801649 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-internal-tls-certs\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.804733 master-2 kubenswrapper[4776]: I1011 10:55:15.801741 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-public-tls-certs\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.900984 master-2 kubenswrapper[4776]: I1011 10:55:15.900923 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-86bdd47775-gpz8z"] Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.902107 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.903019 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw848\" (UniqueName: \"kubernetes.io/projected/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-kube-api-access-zw848\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.903079 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-internal-tls-certs\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.903119 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-public-tls-certs\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.903197 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-log-httpd\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.903214 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-run-httpd\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.903230 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-etc-swift\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.903260 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-combined-ca-bundle\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.903282 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-config-data\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.903956 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-run-httpd\") pod \"swift-proxy-66dfdcbff8-j4jhs\" 
(UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.904269 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-log-httpd\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.904894 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.905101 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.908389 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-internal-tls-certs\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.910938 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-public-tls-certs\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.911155 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-etc-swift\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.918725 master-2 kubenswrapper[4776]: I1011 10:55:15.913185 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-config-data\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.944432 master-2 kubenswrapper[4776]: I1011 10:55:15.944387 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-86bdd47775-gpz8z"] Oct 11 10:55:15.953537 master-2 kubenswrapper[4776]: I1011 10:55:15.952915 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw848\" (UniqueName: \"kubernetes.io/projected/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-kube-api-access-zw848\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:15.992741 master-2 kubenswrapper[4776]: I1011 10:55:15.990771 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f57d76b-0f9f-41bd-906b-dac5a7e5a986-combined-ca-bundle\") pod \"swift-proxy-66dfdcbff8-j4jhs\" (UID: \"0f57d76b-0f9f-41bd-906b-dac5a7e5a986\") " pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:16.005962 master-2 kubenswrapper[4776]: I1011 10:55:16.005911 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data-custom\") pod \"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.006134 master-2 kubenswrapper[4776]: I1011 10:55:16.005973 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-combined-ca-bundle\") pod \"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.006134 master-2 kubenswrapper[4776]: I1011 10:55:16.006035 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kschj\" (UniqueName: \"kubernetes.io/projected/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-kube-api-access-kschj\") pod \"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.006134 master-2 kubenswrapper[4776]: I1011 10:55:16.006060 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data\") pod \"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.081863 master-2 kubenswrapper[4776]: I1011 10:55:16.081096 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:16.107069 master-2 kubenswrapper[4776]: I1011 10:55:16.106998 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kschj\" (UniqueName: \"kubernetes.io/projected/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-kube-api-access-kschj\") pod \"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.107270 master-2 kubenswrapper[4776]: I1011 10:55:16.107100 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data\") pod \"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.107270 master-2 kubenswrapper[4776]: I1011 10:55:16.107239 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data-custom\") pod \"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.107370 master-2 kubenswrapper[4776]: I1011 10:55:16.107295 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-combined-ca-bundle\") pod \"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.110938 master-2 kubenswrapper[4776]: I1011 10:55:16.110902 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data\") pod \"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.126725 master-2 kubenswrapper[4776]: I1011 10:55:16.112215 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-combined-ca-bundle\") pod \"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.126725 master-2 kubenswrapper[4776]: I1011 10:55:16.113647 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data-custom\") pod \"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.135725 master-2 kubenswrapper[4776]: I1011 10:55:16.133778 4776 generic.go:334] "Generic (PLEG): container finished" podID="6e1008bd-5444-4490-8e34-8a7843bf5c45" containerID="664efc4ad947feadd97f79da92d3d8787041e164ebbb71c8dd6b636b6c939f3f" exitCode=0 Oct 11 10:55:16.135725 master-2 kubenswrapper[4776]: I1011 10:55:16.133816 4776 generic.go:334] "Generic (PLEG): container finished" podID="6e1008bd-5444-4490-8e34-8a7843bf5c45" containerID="5887930f9bcf6c6925fafe6faf9cfa4414c977579d10c2864ef8ca027b356b34" exitCode=0 Oct 11 10:55:16.135725 master-2 kubenswrapper[4776]: I1011 10:55:16.133991 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" podUID="941fb918-e4b8-4ef7-9ad1-9af907c5593a" containerName="dnsmasq-dns" containerID="cri-o://8b6ad17ffd3f1f6962cd2f8e96a39bece7f7bf59db6192abf67589a9326a7807" gracePeriod=10 Oct 11 10:55:16.135725 master-2 kubenswrapper[4776]: I1011 10:55:16.134064 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-scheduler-0" event={"ID":"6e1008bd-5444-4490-8e34-8a7843bf5c45","Type":"ContainerDied","Data":"664efc4ad947feadd97f79da92d3d8787041e164ebbb71c8dd6b636b6c939f3f"} Oct 11 10:55:16.135725 master-2 kubenswrapper[4776]: I1011 10:55:16.134091 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-scheduler-0" event={"ID":"6e1008bd-5444-4490-8e34-8a7843bf5c45","Type":"ContainerDied","Data":"5887930f9bcf6c6925fafe6faf9cfa4414c977579d10c2864ef8ca027b356b34"} Oct 11 10:55:16.135725 master-2 kubenswrapper[4776]: I1011 10:55:16.134211 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b5802-backup-0" podUID="64da8a05-f383-4643-b08d-639963f8bdd5" containerName="cinder-backup" containerID="cri-o://6cbb734e786ac7a270d6c6e63f2ba748d07f4398724c5368110f5b4040ec1204" gracePeriod=30 Oct 11 10:55:16.135725 master-2 kubenswrapper[4776]: I1011 10:55:16.134834 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b5802-backup-0" podUID="64da8a05-f383-4643-b08d-639963f8bdd5" containerName="probe" containerID="cri-o://5510c48b8a1b349e6fccbe8283441e4880694e565021961bec519004a98d24f9" gracePeriod=30 Oct 11 10:55:16.147928 master-2 kubenswrapper[4776]: I1011 10:55:16.139271 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kschj\" (UniqueName: \"kubernetes.io/projected/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-kube-api-access-kschj\") pod 
\"heat-engine-86bdd47775-gpz8z\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.328697 master-2 kubenswrapper[4776]: I1011 10:55:16.328233 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:16.859398 master-2 kubenswrapper[4776]: I1011 10:55:16.859291 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"] Oct 11 10:55:16.875115 master-2 kubenswrapper[4776]: I1011 10:55:16.874981 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Oct 11 10:55:16.883559 master-2 kubenswrapper[4776]: I1011 10:55:16.881897 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data" Oct 11 10:55:16.883559 master-2 kubenswrapper[4776]: I1011 10:55:16.882092 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Oct 11 10:55:16.883559 master-2 kubenswrapper[4776]: I1011 10:55:16.882093 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts" Oct 11 10:55:16.883559 master-2 kubenswrapper[4776]: I1011 10:55:16.882301 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport" Oct 11 10:55:16.901372 master-2 kubenswrapper[4776]: I1011 10:55:16.899378 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Oct 11 10:55:17.031762 master-2 kubenswrapper[4776]: I1011 10:55:17.030911 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-scripts\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.031762 master-2 kubenswrapper[4776]: I1011 10:55:17.031031 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.031762 master-2 kubenswrapper[4776]: I1011 10:55:17.031060 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5fth\" (UniqueName: \"kubernetes.io/projected/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-kube-api-access-x5fth\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.031762 master-2 kubenswrapper[4776]: I1011 10:55:17.031106 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-config-data\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.031762 master-2 kubenswrapper[4776]: I1011 10:55:17.031138 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " 
pod="openstack/ironic-conductor-0" Oct 11 10:55:17.031762 master-2 kubenswrapper[4776]: I1011 10:55:17.031161 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.031762 master-2 kubenswrapper[4776]: I1011 10:55:17.031221 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-72cf197d-b530-4337-9ce7-c4684efc1643\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29100426-23f3-402d-9fda-ad3e2743ec8a\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.031762 master-2 kubenswrapper[4776]: I1011 10:55:17.031284 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.105771 master-2 kubenswrapper[4776]: I1011 10:55:17.105718 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.132767 master-2 kubenswrapper[4776]: I1011 10:55:17.132701 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-scripts\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.132997 master-2 kubenswrapper[4776]: I1011 10:55:17.132791 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.132997 master-2 kubenswrapper[4776]: I1011 10:55:17.132811 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5fth\" (UniqueName: \"kubernetes.io/projected/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-kube-api-access-x5fth\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.132997 master-2 kubenswrapper[4776]: I1011 10:55:17.132841 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-config-data\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.132997 master-2 kubenswrapper[4776]: I1011 10:55:17.132865 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.132997 master-2 kubenswrapper[4776]: I1011 10:55:17.132884 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: 
\"kubernetes.io/downward-api/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.132997 master-2 kubenswrapper[4776]: I1011 10:55:17.132932 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-72cf197d-b530-4337-9ce7-c4684efc1643\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29100426-23f3-402d-9fda-ad3e2743ec8a\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.132997 master-2 kubenswrapper[4776]: I1011 10:55:17.132962 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.134271 master-2 kubenswrapper[4776]: I1011 10:55:17.133961 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.135294 master-2 kubenswrapper[4776]: I1011 10:55:17.135268 4776 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 11 10:55:17.135371 master-2 kubenswrapper[4776]: I1011 10:55:17.135303 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-72cf197d-b530-4337-9ce7-c4684efc1643\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29100426-23f3-402d-9fda-ad3e2743ec8a\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/ab2d09fe8f862871592cfbd594d467b006a80d82a58168e1cd7a1c526a517195/globalmount\"" pod="openstack/ironic-conductor-0" Oct 11 10:55:17.136372 master-2 kubenswrapper[4776]: I1011 10:55:17.136286 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.136882 master-2 kubenswrapper[4776]: I1011 10:55:17.136864 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.137157 master-2 kubenswrapper[4776]: I1011 10:55:17.137097 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-config-data\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.138363 master-2 kubenswrapper[4776]: I1011 10:55:17.138284 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: 
\"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.151806 master-2 kubenswrapper[4776]: I1011 10:55:17.151704 4776 generic.go:334] "Generic (PLEG): container finished" podID="64da8a05-f383-4643-b08d-639963f8bdd5" containerID="5510c48b8a1b349e6fccbe8283441e4880694e565021961bec519004a98d24f9" exitCode=0 Oct 11 10:55:17.152666 master-2 kubenswrapper[4776]: I1011 10:55:17.151805 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-backup-0" event={"ID":"64da8a05-f383-4643-b08d-639963f8bdd5","Type":"ContainerDied","Data":"5510c48b8a1b349e6fccbe8283441e4880694e565021961bec519004a98d24f9"} Oct 11 10:55:17.155546 master-2 kubenswrapper[4776]: I1011 10:55:17.155488 4776 generic.go:334] "Generic (PLEG): container finished" podID="941fb918-e4b8-4ef7-9ad1-9af907c5593a" containerID="8b6ad17ffd3f1f6962cd2f8e96a39bece7f7bf59db6192abf67589a9326a7807" exitCode=0 Oct 11 10:55:17.155705 master-2 kubenswrapper[4776]: I1011 10:55:17.155636 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" event={"ID":"941fb918-e4b8-4ef7-9ad1-9af907c5593a","Type":"ContainerDied","Data":"8b6ad17ffd3f1f6962cd2f8e96a39bece7f7bf59db6192abf67589a9326a7807"} Oct 11 10:55:17.155997 master-2 kubenswrapper[4776]: I1011 10:55:17.155958 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-scripts\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.164580 master-2 kubenswrapper[4776]: I1011 10:55:17.164535 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-scheduler-0" event={"ID":"6e1008bd-5444-4490-8e34-8a7843bf5c45","Type":"ContainerDied","Data":"b44ccb387136d6d7afdc04ea61ce79bb899e6fab306823599152bcb3ecd66c0a"} Oct 11 10:55:17.164948 master-2 kubenswrapper[4776]: I1011 10:55:17.164599 4776 scope.go:117] "RemoveContainer" containerID="664efc4ad947feadd97f79da92d3d8787041e164ebbb71c8dd6b636b6c939f3f" Oct 11 10:55:17.164948 master-2 kubenswrapper[4776]: I1011 10:55:17.164811 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.167383 master-2 kubenswrapper[4776]: I1011 10:55:17.166968 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5fth\" (UniqueName: \"kubernetes.io/projected/98ff7c8d-cc7c-4b25-917b-88dfa7f837c5-kube-api-access-x5fth\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:17.234389 master-2 kubenswrapper[4776]: I1011 10:55:17.234332 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data-custom\") pod \"6e1008bd-5444-4490-8e34-8a7843bf5c45\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " Oct 11 10:55:17.234821 master-2 kubenswrapper[4776]: I1011 10:55:17.234783 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-combined-ca-bundle\") pod \"6e1008bd-5444-4490-8e34-8a7843bf5c45\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " Oct 11 10:55:17.234890 master-2 kubenswrapper[4776]: I1011 10:55:17.234867 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e1008bd-5444-4490-8e34-8a7843bf5c45-etc-machine-id\") pod \"6e1008bd-5444-4490-8e34-8a7843bf5c45\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " Oct 11 10:55:17.234956 master-2 kubenswrapper[4776]: I1011 10:55:17.234934 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlmpd\" (UniqueName: \"kubernetes.io/projected/6e1008bd-5444-4490-8e34-8a7843bf5c45-kube-api-access-jlmpd\") pod \"6e1008bd-5444-4490-8e34-8a7843bf5c45\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " Oct 11 10:55:17.235004 master-2 kubenswrapper[4776]: I1011 10:55:17.234984 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data\") pod \"6e1008bd-5444-4490-8e34-8a7843bf5c45\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " Oct 11 10:55:17.235041 master-2 kubenswrapper[4776]: I1011 10:55:17.235014 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-scripts\") pod \"6e1008bd-5444-4490-8e34-8a7843bf5c45\" (UID: \"6e1008bd-5444-4490-8e34-8a7843bf5c45\") " Oct 11 10:55:17.235457 master-2 kubenswrapper[4776]: I1011 10:55:17.235395 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e1008bd-5444-4490-8e34-8a7843bf5c45-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6e1008bd-5444-4490-8e34-8a7843bf5c45" (UID: "6e1008bd-5444-4490-8e34-8a7843bf5c45"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:17.235964 master-2 kubenswrapper[4776]: I1011 10:55:17.235773 4776 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e1008bd-5444-4490-8e34-8a7843bf5c45-etc-machine-id\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:17.239401 master-2 kubenswrapper[4776]: I1011 10:55:17.239112 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6e1008bd-5444-4490-8e34-8a7843bf5c45" (UID: "6e1008bd-5444-4490-8e34-8a7843bf5c45"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:17.239536 master-2 kubenswrapper[4776]: I1011 10:55:17.239495 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e1008bd-5444-4490-8e34-8a7843bf5c45-kube-api-access-jlmpd" (OuterVolumeSpecName: "kube-api-access-jlmpd") pod "6e1008bd-5444-4490-8e34-8a7843bf5c45" (UID: "6e1008bd-5444-4490-8e34-8a7843bf5c45"). InnerVolumeSpecName "kube-api-access-jlmpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:17.241858 master-2 kubenswrapper[4776]: I1011 10:55:17.241820 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-scripts" (OuterVolumeSpecName: "scripts") pod "6e1008bd-5444-4490-8e34-8a7843bf5c45" (UID: "6e1008bd-5444-4490-8e34-8a7843bf5c45"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:17.292915 master-2 kubenswrapper[4776]: I1011 10:55:17.292847 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e1008bd-5444-4490-8e34-8a7843bf5c45" (UID: "6e1008bd-5444-4490-8e34-8a7843bf5c45"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:17.326075 master-2 kubenswrapper[4776]: I1011 10:55:17.325365 4776 scope.go:117] "RemoveContainer" containerID="5887930f9bcf6c6925fafe6faf9cfa4414c977579d10c2864ef8ca027b356b34" Oct 11 10:55:17.337481 master-2 kubenswrapper[4776]: I1011 10:55:17.337431 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:17.337481 master-2 kubenswrapper[4776]: I1011 10:55:17.337482 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlmpd\" (UniqueName: \"kubernetes.io/projected/6e1008bd-5444-4490-8e34-8a7843bf5c45-kube-api-access-jlmpd\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:17.337868 master-2 kubenswrapper[4776]: I1011 10:55:17.337499 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:17.337868 master-2 kubenswrapper[4776]: I1011 10:55:17.337513 4776 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data-custom\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:17.356848 master-2 kubenswrapper[4776]: I1011 10:55:17.356206 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-66dfdcbff8-j4jhs"] Oct 11 10:55:17.360171 master-2 kubenswrapper[4776]: I1011 10:55:17.359349 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data" (OuterVolumeSpecName: "config-data") pod "6e1008bd-5444-4490-8e34-8a7843bf5c45" (UID: "6e1008bd-5444-4490-8e34-8a7843bf5c45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:17.374977 master-2 kubenswrapper[4776]: I1011 10:55:17.374344 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-86bdd47775-gpz8z"] Oct 11 10:55:17.447414 master-2 kubenswrapper[4776]: I1011 10:55:17.445841 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1008bd-5444-4490-8e34-8a7843bf5c45-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:17.478882 master-2 kubenswrapper[4776]: I1011 10:55:17.478767 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:17.557528 master-2 kubenswrapper[4776]: I1011 10:55:17.557462 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-b5802-api-2" Oct 11 10:55:17.574333 master-2 kubenswrapper[4776]: I1011 10:55:17.573802 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b5802-scheduler-0"] Oct 11 10:55:17.595076 master-2 kubenswrapper[4776]: I1011 10:55:17.594988 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b5802-scheduler-0"] Oct 11 10:55:17.609564 master-2 kubenswrapper[4776]: I1011 10:55:17.609386 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b5802-scheduler-0"] Oct 11 10:55:17.610078 master-2 kubenswrapper[4776]: E1011 10:55:17.610052 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="941fb918-e4b8-4ef7-9ad1-9af907c5593a" containerName="dnsmasq-dns" Oct 11 10:55:17.610142 master-2 kubenswrapper[4776]: I1011 10:55:17.610079 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="941fb918-e4b8-4ef7-9ad1-9af907c5593a" containerName="dnsmasq-dns" Oct 11 10:55:17.610142 master-2 kubenswrapper[4776]: E1011 10:55:17.610105 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e1008bd-5444-4490-8e34-8a7843bf5c45" containerName="cinder-scheduler" Oct 11 10:55:17.610142 master-2 kubenswrapper[4776]: I1011 10:55:17.610115 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e1008bd-5444-4490-8e34-8a7843bf5c45" containerName="cinder-scheduler" Oct 11 10:55:17.610281 master-2 kubenswrapper[4776]: E1011 10:55:17.610150 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="941fb918-e4b8-4ef7-9ad1-9af907c5593a" containerName="init" Oct 11 10:55:17.610281 master-2 kubenswrapper[4776]: I1011 10:55:17.610159 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="941fb918-e4b8-4ef7-9ad1-9af907c5593a" containerName="init" Oct 11 10:55:17.610281 master-2 kubenswrapper[4776]: E1011 10:55:17.610178 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e1008bd-5444-4490-8e34-8a7843bf5c45" containerName="probe" Oct 11 10:55:17.610281 master-2 kubenswrapper[4776]: I1011 10:55:17.610187 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e1008bd-5444-4490-8e34-8a7843bf5c45" containerName="probe" Oct 11 10:55:17.610464 master-2 kubenswrapper[4776]: I1011 10:55:17.610442 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e1008bd-5444-4490-8e34-8a7843bf5c45" containerName="cinder-scheduler" Oct 11 10:55:17.610634 master-2 kubenswrapper[4776]: I1011 10:55:17.610473 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="941fb918-e4b8-4ef7-9ad1-9af907c5593a" containerName="dnsmasq-dns" Oct 11 10:55:17.610634 master-2 kubenswrapper[4776]: I1011 10:55:17.610491 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e1008bd-5444-4490-8e34-8a7843bf5c45" containerName="probe" Oct 11 10:55:17.613340 master-2 kubenswrapper[4776]: I1011 10:55:17.613307 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.620582 master-2 kubenswrapper[4776]: I1011 10:55:17.620533 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-scheduler-config-data" Oct 11 10:55:17.635101 master-2 kubenswrapper[4776]: I1011 10:55:17.633377 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-scheduler-0"] Oct 11 10:55:17.653334 master-2 kubenswrapper[4776]: I1011 10:55:17.653194 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-swift-storage-0\") pod \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " Oct 11 10:55:17.653334 master-2 kubenswrapper[4776]: I1011 10:55:17.653265 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4jjl\" (UniqueName: \"kubernetes.io/projected/941fb918-e4b8-4ef7-9ad1-9af907c5593a-kube-api-access-k4jjl\") pod \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " Oct 11 10:55:17.653480 master-2 kubenswrapper[4776]: I1011 10:55:17.653410 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-nb\") pod \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " Oct 11 10:55:17.653480 master-2 kubenswrapper[4776]: I1011 10:55:17.653442 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-config\") pod \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " Oct 11 10:55:17.653480 master-2 kubenswrapper[4776]: I1011 10:55:17.653470 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-svc\") pod \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " Oct 11 10:55:17.653584 master-2 kubenswrapper[4776]: I1011 10:55:17.653504 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-sb\") pod \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\" (UID: \"941fb918-e4b8-4ef7-9ad1-9af907c5593a\") " Oct 11 10:55:17.660154 master-2 kubenswrapper[4776]: I1011 10:55:17.659783 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/941fb918-e4b8-4ef7-9ad1-9af907c5593a-kube-api-access-k4jjl" (OuterVolumeSpecName: "kube-api-access-k4jjl") pod "941fb918-e4b8-4ef7-9ad1-9af907c5593a" (UID: "941fb918-e4b8-4ef7-9ad1-9af907c5593a"). InnerVolumeSpecName "kube-api-access-k4jjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:17.734916 master-2 kubenswrapper[4776]: I1011 10:55:17.734832 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "941fb918-e4b8-4ef7-9ad1-9af907c5593a" (UID: "941fb918-e4b8-4ef7-9ad1-9af907c5593a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:17.740604 master-2 kubenswrapper[4776]: I1011 10:55:17.740561 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-config" (OuterVolumeSpecName: "config") pod "941fb918-e4b8-4ef7-9ad1-9af907c5593a" (UID: "941fb918-e4b8-4ef7-9ad1-9af907c5593a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:17.741240 master-2 kubenswrapper[4776]: I1011 10:55:17.741208 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "941fb918-e4b8-4ef7-9ad1-9af907c5593a" (UID: "941fb918-e4b8-4ef7-9ad1-9af907c5593a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:17.759709 master-2 kubenswrapper[4776]: I1011 10:55:17.756256 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-combined-ca-bundle\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.759709 master-2 kubenswrapper[4776]: I1011 10:55:17.756358 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-scripts\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.759709 master-2 kubenswrapper[4776]: I1011 10:55:17.756446 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-config-data-custom\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.759709 master-2 kubenswrapper[4776]: I1011 10:55:17.756516 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-config-data\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.759709 master-2 kubenswrapper[4776]: I1011 10:55:17.756546 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-etc-machine-id\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.759709 master-2 kubenswrapper[4776]: I1011 10:55:17.756613 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrvwb\" (UniqueName: \"kubernetes.io/projected/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-kube-api-access-rrvwb\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.759709 master-2 kubenswrapper[4776]: I1011 10:55:17.756722 4776 reconciler_common.go:293] "Volume detached for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-nb\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:17.759709 master-2 kubenswrapper[4776]: I1011 10:55:17.756739 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:17.759709 master-2 kubenswrapper[4776]: I1011 10:55:17.756754 4776 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-svc\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:17.759709 master-2 kubenswrapper[4776]: I1011 10:55:17.756770 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4jjl\" (UniqueName: \"kubernetes.io/projected/941fb918-e4b8-4ef7-9ad1-9af907c5593a-kube-api-access-k4jjl\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:17.759709 master-2 kubenswrapper[4776]: I1011 10:55:17.759128 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "941fb918-e4b8-4ef7-9ad1-9af907c5593a" (UID: "941fb918-e4b8-4ef7-9ad1-9af907c5593a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:17.859432 master-2 kubenswrapper[4776]: I1011 10:55:17.859210 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-etc-machine-id\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.859432 master-2 kubenswrapper[4776]: I1011 10:55:17.859340 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrvwb\" (UniqueName: \"kubernetes.io/projected/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-kube-api-access-rrvwb\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.859432 master-2 kubenswrapper[4776]: I1011 10:55:17.859426 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-combined-ca-bundle\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.860243 master-2 kubenswrapper[4776]: I1011 10:55:17.859481 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-scripts\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.860243 master-2 kubenswrapper[4776]: I1011 10:55:17.859574 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-config-data-custom\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.860243 master-2 kubenswrapper[4776]: I1011 10:55:17.859656 4776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-config-data\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.860243 master-2 kubenswrapper[4776]: I1011 10:55:17.859759 4776 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-dns-swift-storage-0\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:17.861059 master-2 kubenswrapper[4776]: I1011 10:55:17.861000 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-etc-machine-id\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.867833 master-2 kubenswrapper[4776]: I1011 10:55:17.867700 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-config-data\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.868554 master-2 kubenswrapper[4776]: I1011 10:55:17.868064 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-scripts\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.875328 master-2 kubenswrapper[4776]: I1011 10:55:17.875293 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-config-data-custom\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.880258 master-2 kubenswrapper[4776]: I1011 10:55:17.880191 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-combined-ca-bundle\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.903487 master-2 kubenswrapper[4776]: I1011 10:55:17.903372 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrvwb\" (UniqueName: \"kubernetes.io/projected/eaeb2b93-f2cd-4a03-961c-b9127f72a9d0-kube-api-access-rrvwb\") pod \"cinder-b5802-scheduler-0\" (UID: \"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0\") " pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:17.956530 master-2 kubenswrapper[4776]: I1011 10:55:17.956482 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:18.006597 master-2 kubenswrapper[4776]: I1011 10:55:18.006517 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "941fb918-e4b8-4ef7-9ad1-9af907c5593a" (UID: "941fb918-e4b8-4ef7-9ad1-9af907c5593a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:18.068141 master-2 kubenswrapper[4776]: I1011 10:55:18.068090 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/941fb918-e4b8-4ef7-9ad1-9af907c5593a-ovsdbserver-sb\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:18.075733 master-2 kubenswrapper[4776]: I1011 10:55:18.073156 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e1008bd-5444-4490-8e34-8a7843bf5c45" path="/var/lib/kubelet/pods/6e1008bd-5444-4490-8e34-8a7843bf5c45/volumes" Oct 11 10:55:18.183791 master-2 kubenswrapper[4776]: I1011 10:55:18.182800 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" event={"ID":"941fb918-e4b8-4ef7-9ad1-9af907c5593a","Type":"ContainerDied","Data":"512e94719fefec26efbc4a52f6f8eafa4db87fb35a6caaa6e2a7c8012427af32"} Oct 11 10:55:18.183791 master-2 kubenswrapper[4776]: I1011 10:55:18.182909 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9f6b86c79-52ppr" Oct 11 10:55:18.183791 master-2 kubenswrapper[4776]: I1011 10:55:18.183081 4776 scope.go:117] "RemoveContainer" containerID="8b6ad17ffd3f1f6962cd2f8e96a39bece7f7bf59db6192abf67589a9326a7807" Oct 11 10:55:18.191102 master-2 kubenswrapper[4776]: I1011 10:55:18.190941 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-66dfdcbff8-j4jhs" event={"ID":"0f57d76b-0f9f-41bd-906b-dac5a7e5a986","Type":"ContainerStarted","Data":"6e1bfd14f3480ae7a609ce5fbee7ed185e3ebb89b3f68759b4f189fe0bdcf590"} Oct 11 10:55:18.191102 master-2 kubenswrapper[4776]: I1011 10:55:18.190997 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-66dfdcbff8-j4jhs" event={"ID":"0f57d76b-0f9f-41bd-906b-dac5a7e5a986","Type":"ContainerStarted","Data":"d7f0d7f8cd82c9acd61280069c44b22a1e6064adff1007bf7bcc19c4493bdd20"} Oct 11 10:55:18.192218 master-2 kubenswrapper[4776]: I1011 10:55:18.191783 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:18.192440 master-2 kubenswrapper[4776]: I1011 10:55:18.192316 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:18.198039 master-2 kubenswrapper[4776]: I1011 10:55:18.197981 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"d0cc3965-fd34-4442-b408-a5ae441443e4","Type":"ContainerDied","Data":"30c682488fe3a4cbd0fa8fdf8a635610245b1344157964f363074a15ace75969"} Oct 11 10:55:18.199611 master-2 kubenswrapper[4776]: I1011 10:55:18.199099 4776 generic.go:334] "Generic (PLEG): container finished" podID="d0cc3965-fd34-4442-b408-a5ae441443e4" containerID="30c682488fe3a4cbd0fa8fdf8a635610245b1344157964f363074a15ace75969" exitCode=0 Oct 11 10:55:18.204685 master-2 kubenswrapper[4776]: I1011 10:55:18.204536 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-86bdd47775-gpz8z" event={"ID":"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49","Type":"ContainerStarted","Data":"d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c"} Oct 11 10:55:18.204685 master-2 kubenswrapper[4776]: I1011 10:55:18.204628 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-86bdd47775-gpz8z" 
event={"ID":"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49","Type":"ContainerStarted","Data":"85ac756e7a970e46ccdb81506b9b8549165f9c0b853da21e24277ff1af233582"} Oct 11 10:55:18.204962 master-2 kubenswrapper[4776]: I1011 10:55:18.204921 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:18.206129 master-2 kubenswrapper[4776]: I1011 10:55:18.206060 4776 scope.go:117] "RemoveContainer" containerID="164c282461ad0735fec232ffd6ba306dca0fc05dcf0a0b68a9bb53e9c6e9c07c" Oct 11 10:55:18.525432 master-2 kubenswrapper[4776]: I1011 10:55:18.525363 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-72cf197d-b530-4337-9ce7-c4684efc1643\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29100426-23f3-402d-9fda-ad3e2743ec8a\") pod \"ironic-conductor-0\" (UID: \"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5\") " pod="openstack/ironic-conductor-0" Oct 11 10:55:18.549462 master-2 kubenswrapper[4776]: I1011 10:55:18.549390 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:18.681002 master-2 kubenswrapper[4776]: I1011 10:55:18.680907 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-combined-ca-bundle\") pod \"d0cc3965-fd34-4442-b408-a5ae441443e4\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " Oct 11 10:55:18.681537 master-2 kubenswrapper[4776]: I1011 10:55:18.681515 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-httpd-run\") pod \"d0cc3965-fd34-4442-b408-a5ae441443e4\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " Oct 11 10:55:18.681976 master-2 kubenswrapper[4776]: I1011 10:55:18.681953 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbzvc\" (UniqueName: \"kubernetes.io/projected/d0cc3965-fd34-4442-b408-a5ae441443e4-kube-api-access-jbzvc\") pod \"d0cc3965-fd34-4442-b408-a5ae441443e4\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " Oct 11 10:55:18.682233 master-2 kubenswrapper[4776]: I1011 10:55:18.682215 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-config-data\") pod \"d0cc3965-fd34-4442-b408-a5ae441443e4\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " Oct 11 10:55:18.682975 master-2 kubenswrapper[4776]: I1011 10:55:18.682955 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"d0cc3965-fd34-4442-b408-a5ae441443e4\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " Oct 11 10:55:18.683189 master-2 kubenswrapper[4776]: I1011 10:55:18.683168 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-internal-tls-certs\") pod \"d0cc3965-fd34-4442-b408-a5ae441443e4\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " Oct 11 10:55:18.683293 master-2 kubenswrapper[4776]: I1011 10:55:18.683276 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-logs\") pod \"d0cc3965-fd34-4442-b408-a5ae441443e4\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " Oct 11 10:55:18.683390 master-2 kubenswrapper[4776]: I1011 10:55:18.683375 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-scripts\") pod \"d0cc3965-fd34-4442-b408-a5ae441443e4\" (UID: \"d0cc3965-fd34-4442-b408-a5ae441443e4\") " Oct 11 10:55:18.684402 master-2 kubenswrapper[4776]: I1011 10:55:18.683283 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d0cc3965-fd34-4442-b408-a5ae441443e4" (UID: "d0cc3965-fd34-4442-b408-a5ae441443e4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:55:18.687534 master-2 kubenswrapper[4776]: I1011 10:55:18.685075 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-logs" (OuterVolumeSpecName: "logs") pod "d0cc3965-fd34-4442-b408-a5ae441443e4" (UID: "d0cc3965-fd34-4442-b408-a5ae441443e4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:55:18.688345 master-2 kubenswrapper[4776]: I1011 10:55:18.688061 4776 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-httpd-run\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:18.688345 master-2 kubenswrapper[4776]: I1011 10:55:18.688052 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-86bdd47775-gpz8z" podStartSLOduration=3.688033387 podStartE2EDuration="3.688033387s" podCreationTimestamp="2025-10-11 10:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:18.500773225 +0000 UTC m=+1753.285199934" watchObservedRunningTime="2025-10-11 10:55:18.688033387 +0000 UTC m=+1753.472460096" Oct 11 10:55:18.688954 master-2 kubenswrapper[4776]: I1011 10:55:18.688844 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-scripts" (OuterVolumeSpecName: "scripts") pod "d0cc3965-fd34-4442-b408-a5ae441443e4" (UID: "d0cc3965-fd34-4442-b408-a5ae441443e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:18.688954 master-2 kubenswrapper[4776]: I1011 10:55:18.688090 4776 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0cc3965-fd34-4442-b408-a5ae441443e4-logs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:18.688954 master-2 kubenswrapper[4776]: I1011 10:55:18.688874 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0cc3965-fd34-4442-b408-a5ae441443e4-kube-api-access-jbzvc" (OuterVolumeSpecName: "kube-api-access-jbzvc") pod "d0cc3965-fd34-4442-b408-a5ae441443e4" (UID: "d0cc3965-fd34-4442-b408-a5ae441443e4"). InnerVolumeSpecName "kube-api-access-jbzvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:18.707406 master-2 kubenswrapper[4776]: I1011 10:55:18.707361 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574" (OuterVolumeSpecName: "glance") pod "d0cc3965-fd34-4442-b408-a5ae441443e4" (UID: "d0cc3965-fd34-4442-b408-a5ae441443e4"). InnerVolumeSpecName "pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 11 10:55:18.710618 master-2 kubenswrapper[4776]: W1011 10:55:18.709948 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaeb2b93_f2cd_4a03_961c_b9127f72a9d0.slice/crio-1fe678aa21f1fd8da745571265cb6a0e528c6cfbae4416ae9ce21d21aa2d5d8e WatchSource:0}: Error finding container 1fe678aa21f1fd8da745571265cb6a0e528c6cfbae4416ae9ce21d21aa2d5d8e: Status 404 returned error can't find the container with id 1fe678aa21f1fd8da745571265cb6a0e528c6cfbae4416ae9ce21d21aa2d5d8e Oct 11 10:55:18.710618 master-2 kubenswrapper[4776]: I1011 10:55:18.710387 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-scheduler-0"] Oct 11 10:55:18.717784 master-2 kubenswrapper[4776]: I1011 10:55:18.717494 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0cc3965-fd34-4442-b408-a5ae441443e4" (UID: "d0cc3965-fd34-4442-b408-a5ae441443e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:18.734056 master-2 kubenswrapper[4776]: I1011 10:55:18.730809 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-config-data" (OuterVolumeSpecName: "config-data") pod "d0cc3965-fd34-4442-b408-a5ae441443e4" (UID: "d0cc3965-fd34-4442-b408-a5ae441443e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:18.756598 master-2 kubenswrapper[4776]: I1011 10:55:18.741204 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d0cc3965-fd34-4442-b408-a5ae441443e4" (UID: "d0cc3965-fd34-4442-b408-a5ae441443e4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:18.792113 master-2 kubenswrapper[4776]: I1011 10:55:18.792025 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:18.793121 master-2 kubenswrapper[4776]: I1011 10:55:18.793080 4776 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") on node \"master-2\" " Oct 11 10:55:18.793121 master-2 kubenswrapper[4776]: I1011 10:55:18.793114 4776 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-internal-tls-certs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:18.793204 master-2 kubenswrapper[4776]: I1011 10:55:18.793127 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:18.793204 master-2 kubenswrapper[4776]: I1011 10:55:18.793139 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc3965-fd34-4442-b408-a5ae441443e4-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:18.793204 master-2 kubenswrapper[4776]: I1011 10:55:18.793150 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbzvc\" (UniqueName: \"kubernetes.io/projected/d0cc3965-fd34-4442-b408-a5ae441443e4-kube-api-access-jbzvc\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:18.841380 master-2 kubenswrapper[4776]: I1011 10:55:18.841289 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-66dfdcbff8-j4jhs" podStartSLOduration=3.841265117 podStartE2EDuration="3.841265117s" podCreationTimestamp="2025-10-11 10:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:18.766099901 +0000 UTC m=+1753.550526610" watchObservedRunningTime="2025-10-11 10:55:18.841265117 +0000 UTC m=+1753.625691826" Oct 11 10:55:18.852547 master-2 kubenswrapper[4776]: I1011 10:55:18.852473 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9f6b86c79-52ppr"] Oct 11 10:55:18.855858 master-2 kubenswrapper[4776]: I1011 10:55:18.855827 4776 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Oct 11 10:55:18.856171 master-2 kubenswrapper[4776]: I1011 10:55:18.856153 4776 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c" (UniqueName: "kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574") on node "master-2" Oct 11 10:55:18.902720 master-2 kubenswrapper[4776]: I1011 10:55:18.902655 4776 reconciler_common.go:293] "Volume detached for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:18.938580 master-2 kubenswrapper[4776]: I1011 10:55:18.938532 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9f6b86c79-52ppr"] Oct 11 10:55:19.018448 master-2 kubenswrapper[4776]: I1011 10:55:19.017667 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Oct 11 10:55:19.272766 master-2 kubenswrapper[4776]: I1011 10:55:19.248030 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-66dfdcbff8-j4jhs" event={"ID":"0f57d76b-0f9f-41bd-906b-dac5a7e5a986","Type":"ContainerStarted","Data":"90e5c2646fea77ee3381b10c994dfe8b9170fa307052809ee3bd672c8f8c09e5"} Oct 11 10:55:19.272766 master-2 kubenswrapper[4776]: I1011 10:55:19.270952 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"d0cc3965-fd34-4442-b408-a5ae441443e4","Type":"ContainerDied","Data":"020cb51e8f192e46e701d1c522ecf5cc9d035525d4d7b945c86775cc56da8867"} Oct 11 10:55:19.272766 master-2 kubenswrapper[4776]: I1011 10:55:19.271007 4776 scope.go:117] "RemoveContainer" containerID="30c682488fe3a4cbd0fa8fdf8a635610245b1344157964f363074a15ace75969" Oct 11 10:55:19.272766 master-2 kubenswrapper[4776]: I1011 10:55:19.271153 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.309009 master-2 kubenswrapper[4776]: I1011 10:55:19.293719 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-scheduler-0" event={"ID":"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0","Type":"ContainerStarted","Data":"1fe678aa21f1fd8da745571265cb6a0e528c6cfbae4416ae9ce21d21aa2d5d8e"} Oct 11 10:55:19.524877 master-2 kubenswrapper[4776]: I1011 10:55:19.524754 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:55:19.612291 master-2 kubenswrapper[4776]: I1011 10:55:19.612220 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:55:19.692937 master-2 kubenswrapper[4776]: I1011 10:55:19.692875 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:55:19.693229 master-2 kubenswrapper[4776]: E1011 10:55:19.693210 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0cc3965-fd34-4442-b408-a5ae441443e4" containerName="glance-httpd" Oct 11 10:55:19.693229 master-2 kubenswrapper[4776]: I1011 10:55:19.693227 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0cc3965-fd34-4442-b408-a5ae441443e4" containerName="glance-httpd" Oct 11 10:55:19.693339 master-2 kubenswrapper[4776]: E1011 10:55:19.693256 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0cc3965-fd34-4442-b408-a5ae441443e4" containerName="glance-log" Oct 11 10:55:19.693339 master-2 kubenswrapper[4776]: I1011 10:55:19.693264 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0cc3965-fd34-4442-b408-a5ae441443e4" containerName="glance-log" Oct 11 10:55:19.693423 master-2 kubenswrapper[4776]: I1011 10:55:19.693406 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0cc3965-fd34-4442-b408-a5ae441443e4" containerName="glance-httpd" Oct 11 10:55:19.693465 master-2 kubenswrapper[4776]: I1011 10:55:19.693429 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0cc3965-fd34-4442-b408-a5ae441443e4" containerName="glance-log" Oct 11 10:55:19.694400 master-2 kubenswrapper[4776]: I1011 10:55:19.694369 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.697199 master-2 kubenswrapper[4776]: I1011 10:55:19.697150 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 11 10:55:19.697625 master-2 kubenswrapper[4776]: I1011 10:55:19.697576 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-b5802-default-internal-config-data" Oct 11 10:55:19.720278 master-2 kubenswrapper[4776]: I1011 10:55:19.720221 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:55:19.825204 master-2 kubenswrapper[4776]: I1011 10:55:19.825075 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-config-data\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.825204 master-2 kubenswrapper[4776]: I1011 10:55:19.825130 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.825204 master-2 kubenswrapper[4776]: I1011 10:55:19.825159 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-httpd-run\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.825204 master-2 kubenswrapper[4776]: I1011 10:55:19.825176 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnqtp\" (UniqueName: \"kubernetes.io/projected/7353cefe-e495-4633-9472-93497ca94612-kube-api-access-nnqtp\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.825204 master-2 kubenswrapper[4776]: I1011 10:55:19.825206 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-logs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.825557 master-2 kubenswrapper[4776]: I1011 10:55:19.825232 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-scripts\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.825557 master-2 kubenswrapper[4776]: I1011 10:55:19.825308 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-internal-tls-certs\") pod \"glance-b5802-default-internal-api-0\" 
(UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.825557 master-2 kubenswrapper[4776]: I1011 10:55:19.825337 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.932721 master-2 kubenswrapper[4776]: I1011 10:55:19.927550 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-internal-tls-certs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.932721 master-2 kubenswrapper[4776]: I1011 10:55:19.927656 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-config-data\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.932721 master-2 kubenswrapper[4776]: I1011 10:55:19.927693 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.932721 master-2 kubenswrapper[4776]: I1011 10:55:19.927716 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-httpd-run\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.932721 master-2 kubenswrapper[4776]: I1011 10:55:19.927737 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnqtp\" (UniqueName: \"kubernetes.io/projected/7353cefe-e495-4633-9472-93497ca94612-kube-api-access-nnqtp\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.932721 master-2 kubenswrapper[4776]: I1011 10:55:19.927770 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-logs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.932721 master-2 kubenswrapper[4776]: I1011 10:55:19.927798 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-scripts\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.932721 master-2 kubenswrapper[4776]: I1011 10:55:19.931283 4776 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-scripts\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.932721 master-2 kubenswrapper[4776]: I1011 10:55:19.931542 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-httpd-run\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.932721 master-2 kubenswrapper[4776]: I1011 10:55:19.931571 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.932721 master-2 kubenswrapper[4776]: I1011 10:55:19.932079 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-logs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.935245 master-2 kubenswrapper[4776]: I1011 10:55:19.935201 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-internal-tls-certs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:19.936914 master-2 kubenswrapper[4776]: I1011 10:55:19.936891 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-config-data\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:20.029485 master-2 kubenswrapper[4776]: I1011 10:55:20.029428 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:20.032412 master-2 kubenswrapper[4776]: I1011 10:55:20.031860 4776 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
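The entries above trace the volume lifecycle for glance-b5802-default-internal-api-0: operationExecutor.VerifyControllerAttachedVolume, MountVolume started, MountVolume.SetUp succeeded for each secret/empty-dir volume, and, for the topolvm PVC, the MountDevice (staging) step, which is skipped because the node plugin does not advertise STAGE_UNSTAGE_VOLUME. The snippet below is a minimal sketch, not part of the kubelet or any Kubernetes tooling, for pulling that per-volume timeline out of journal lines like these; it assumes the log is read one journal entry per line (for example, piped in from the kubelet unit log), and the phase strings and patterns are taken from the messages shown above.

# Minimal sketch (illustrative only) that groups kubelet volume-operation log
# entries by volume name, so the attach -> mount -> SetUp sequence for a pod
# is easier to follow. Assumes one journal entry per input line.
import re
import sys
from collections import defaultdict

# Phases as they appear in the kubelet messages above.
PHASES = [
    "operationExecutor.VerifyControllerAttachedVolume started",
    "operationExecutor.MountVolume started",
    "MountVolume.MountDevice succeeded",
    "MountVolume.SetUp succeeded",
    "operationExecutor.UnmountVolume started",
    "UnmountVolume.TearDown succeeded",
    "Volume detached",
]

# Timestamp at the start of each journal entry, e.g. "Oct 11 10:55:19.931283".
TS_RE = re.compile(r"^(\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+)")
# Volume name, quoted either as \"scripts\" (structured klog) or "scripts".
VOL_RE = re.compile(r'for volume \\?"([^"\\]+)\\?"')

def mount_timeline(lines):
    """Return {volume: [(timestamp, phase), ...]} from kubelet log lines."""
    timeline = defaultdict(list)
    for line in lines:
        ts_match = TS_RE.match(line)
        vol_match = VOL_RE.search(line)
        if not ts_match or not vol_match:
            continue
        for phase in PHASES:
            if phase in line:
                timeline[vol_match.group(1)].append((ts_match.group(1), phase))
                break
    return timeline

if __name__ == "__main__":
    for volume, events in mount_timeline(sys.stdin).items():
        print(volume)
        for ts, phase in events:
            print(f"  {ts}  {phase}")

Fed the kubelet journal for this window, such a grouping would collect the scripts, httpd-run, combined-ca-bundle, logs, internal-tls-certs, config-data, kube-api-access-nnqtp and pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c events shown above under their volume names; the exact output formatting is a choice of this sketch, not of the kubelet.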
Oct 11 10:55:20.032412 master-2 kubenswrapper[4776]: I1011 10:55:20.031889 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/c5f302d2b867cc737f2daf9c42090b10daaee38f14f31a51f3dbff0cf77a4fd1/globalmount\"" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:20.072650 master-2 kubenswrapper[4776]: I1011 10:55:20.072585 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="941fb918-e4b8-4ef7-9ad1-9af907c5593a" path="/var/lib/kubelet/pods/941fb918-e4b8-4ef7-9ad1-9af907c5593a/volumes" Oct 11 10:55:20.073527 master-2 kubenswrapper[4776]: I1011 10:55:20.073444 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0cc3965-fd34-4442-b408-a5ae441443e4" path="/var/lib/kubelet/pods/d0cc3965-fd34-4442-b408-a5ae441443e4/volumes" Oct 11 10:55:20.094640 master-2 kubenswrapper[4776]: I1011 10:55:20.094462 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnqtp\" (UniqueName: \"kubernetes.io/projected/7353cefe-e495-4633-9472-93497ca94612-kube-api-access-nnqtp\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:20.307631 master-2 kubenswrapper[4776]: I1011 10:55:20.307540 4776 generic.go:334] "Generic (PLEG): container finished" podID="64da8a05-f383-4643-b08d-639963f8bdd5" containerID="6cbb734e786ac7a270d6c6e63f2ba748d07f4398724c5368110f5b4040ec1204" exitCode=0 Oct 11 10:55:20.307631 master-2 kubenswrapper[4776]: I1011 10:55:20.307632 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-backup-0" event={"ID":"64da8a05-f383-4643-b08d-639963f8bdd5","Type":"ContainerDied","Data":"6cbb734e786ac7a270d6c6e63f2ba748d07f4398724c5368110f5b4040ec1204"} Oct 11 10:55:20.310610 master-2 kubenswrapper[4776]: I1011 10:55:20.310391 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-scheduler-0" event={"ID":"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0","Type":"ContainerStarted","Data":"493a1f0009ac7ed5a732bb40fe0fd5aef3d23c47236e8ce72a5234cf88da9d7d"} Oct 11 10:55:21.312527 master-2 kubenswrapper[4776]: I1011 10:55:21.312426 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:21.512960 master-2 kubenswrapper[4776]: I1011 10:55:21.510518 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:23.933171 master-2 kubenswrapper[4776]: I1011 10:55:23.933085 4776 scope.go:117] "RemoveContainer" containerID="c0a0f68535f1045164a03ee0e1499295237f65e65bc92ffb7cde06bc73007d4d" Oct 11 10:55:24.373653 master-2 kubenswrapper[4776]: I1011 10:55:24.372476 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c1271fdd-4436-4935-b271-89ffa5394bc3","Type":"ContainerStarted","Data":"95f71350a89d3d6fe18419fa54fd58ddaec3ff7d48a3a9105b0b3dfed3802fe6"} Oct 11 10:55:24.404526 master-2 kubenswrapper[4776]: I1011 10:55:24.404294 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.208009944 podStartE2EDuration="13.404276829s" podCreationTimestamp="2025-10-11 10:55:11 +0000 UTC" firstStartedPulling="2025-10-11 10:55:12.84643 +0000 UTC m=+1747.630856709" lastFinishedPulling="2025-10-11 10:55:24.042696875 +0000 UTC m=+1758.827123594" observedRunningTime="2025-10-11 10:55:24.398538283 +0000 UTC m=+1759.182964992" watchObservedRunningTime="2025-10-11 10:55:24.404276829 +0000 UTC m=+1759.188703538" Oct 11 10:55:24.799767 master-2 kubenswrapper[4776]: I1011 10:55:24.799517 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Oct 11 10:55:25.193851 master-2 kubenswrapper[4776]: I1011 10:55:25.192774 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:55:25.330892 master-2 kubenswrapper[4776]: I1011 10:55:25.330299 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-854549b758-grfk2"] Oct 11 10:55:25.339282 master-2 kubenswrapper[4776]: I1011 10:55:25.339134 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.397928 master-2 kubenswrapper[4776]: I1011 10:55:25.391746 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-854549b758-grfk2"] Oct 11 10:55:25.467853 master-2 kubenswrapper[4776]: I1011 10:55:25.467794 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09078de9-6576-4afa-a94e-7b80617bba0f-config-data-custom\") pod \"heat-engine-854549b758-grfk2\" (UID: \"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.468344 master-2 kubenswrapper[4776]: I1011 10:55:25.467883 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfzdt\" (UniqueName: \"kubernetes.io/projected/09078de9-6576-4afa-a94e-7b80617bba0f-kube-api-access-kfzdt\") pod \"heat-engine-854549b758-grfk2\" (UID: \"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.468344 master-2 kubenswrapper[4776]: I1011 10:55:25.468094 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09078de9-6576-4afa-a94e-7b80617bba0f-combined-ca-bundle\") pod \"heat-engine-854549b758-grfk2\" (UID: \"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.468344 master-2 kubenswrapper[4776]: I1011 10:55:25.468168 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09078de9-6576-4afa-a94e-7b80617bba0f-config-data\") pod \"heat-engine-854549b758-grfk2\" (UID: \"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.480792 master-2 kubenswrapper[4776]: I1011 10:55:25.474540 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-scheduler-0" event={"ID":"eaeb2b93-f2cd-4a03-961c-b9127f72a9d0","Type":"ContainerStarted","Data":"2014802943eee19ae9bd36e7ec710859ff77c2726f2b76501cfd630a35b1a3c7"} Oct 11 10:55:25.480792 master-2 kubenswrapper[4776]: I1011 10:55:25.479342 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5","Type":"ContainerStarted","Data":"ddf25d3e6dfc76b87ac6599854740e9e3e9b46a167eab6435fa6a770ea42138e"} Oct 11 10:55:25.480792 master-2 kubenswrapper[4776]: I1011 10:55:25.479389 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5","Type":"ContainerStarted","Data":"531ec9f06ba8fda6003f0a31b9d895b34ee13acb13d8e6c68604b2bb0e9a0c1b"} Oct 11 10:55:25.507788 master-2 kubenswrapper[4776]: I1011 10:55:25.507286 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"7353cefe-e495-4633-9472-93497ca94612","Type":"ContainerStarted","Data":"9a5837d1cb2c6c6bd2ee66332c6614d5f0898097af5aee94ed2b3ff54ca6ee42"} Oct 11 10:55:25.570669 master-2 kubenswrapper[4776]: I1011 10:55:25.570367 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09078de9-6576-4afa-a94e-7b80617bba0f-config-data-custom\") pod \"heat-engine-854549b758-grfk2\" (UID: 
\"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.570669 master-2 kubenswrapper[4776]: I1011 10:55:25.570448 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b5802-scheduler-0" podStartSLOduration=8.570429523 podStartE2EDuration="8.570429523s" podCreationTimestamp="2025-10-11 10:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:25.506004728 +0000 UTC m=+1760.290431437" watchObservedRunningTime="2025-10-11 10:55:25.570429523 +0000 UTC m=+1760.354856232" Oct 11 10:55:25.574797 master-2 kubenswrapper[4776]: I1011 10:55:25.570482 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfzdt\" (UniqueName: \"kubernetes.io/projected/09078de9-6576-4afa-a94e-7b80617bba0f-kube-api-access-kfzdt\") pod \"heat-engine-854549b758-grfk2\" (UID: \"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.574797 master-2 kubenswrapper[4776]: I1011 10:55:25.574489 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09078de9-6576-4afa-a94e-7b80617bba0f-combined-ca-bundle\") pod \"heat-engine-854549b758-grfk2\" (UID: \"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.575266 master-2 kubenswrapper[4776]: I1011 10:55:25.575080 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09078de9-6576-4afa-a94e-7b80617bba0f-config-data\") pod \"heat-engine-854549b758-grfk2\" (UID: \"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.576868 master-2 kubenswrapper[4776]: I1011 10:55:25.575953 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09078de9-6576-4afa-a94e-7b80617bba0f-config-data-custom\") pod \"heat-engine-854549b758-grfk2\" (UID: \"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.587757 master-2 kubenswrapper[4776]: I1011 10:55:25.587650 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09078de9-6576-4afa-a94e-7b80617bba0f-combined-ca-bundle\") pod \"heat-engine-854549b758-grfk2\" (UID: \"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.599730 master-2 kubenswrapper[4776]: I1011 10:55:25.599614 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09078de9-6576-4afa-a94e-7b80617bba0f-config-data\") pod \"heat-engine-854549b758-grfk2\" (UID: \"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.601988 master-2 kubenswrapper[4776]: I1011 10:55:25.601955 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:25.622640 master-2 kubenswrapper[4776]: I1011 10:55:25.622600 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfzdt\" (UniqueName: \"kubernetes.io/projected/09078de9-6576-4afa-a94e-7b80617bba0f-kube-api-access-kfzdt\") pod \"heat-engine-854549b758-grfk2\" (UID: \"09078de9-6576-4afa-a94e-7b80617bba0f\") " pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676534 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-nvme\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676594 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-brick\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676654 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data-custom\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676727 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-lib-modules\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676795 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676795 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676819 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n864d\" (UniqueName: \"kubernetes.io/projected/64da8a05-f383-4643-b08d-639963f8bdd5-kube-api-access-n864d\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676925 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-sys\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676949 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676938 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676991 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "var-lib-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676970 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-lib-cinder\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.677029 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-combined-ca-bundle\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.677057 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-machine-id\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.677075 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-cinder\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.677106 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-iscsi\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.677132 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-run\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.677151 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-dev\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.676996 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-sys" (OuterVolumeSpecName: "sys") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.677225 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.677312 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-scripts\") pod \"64da8a05-f383-4643-b08d-639963f8bdd5\" (UID: \"64da8a05-f383-4643-b08d-639963f8bdd5\") " Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.677996 4776 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-nvme\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.678009 4776 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-brick\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.678020 4776 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-lib-modules\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.678028 4776 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-sys\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.678036 4776 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-lib-cinder\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.678044 4776 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-machine-id\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.678765 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.678836 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.678857 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-run" (OuterVolumeSpecName: "run") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:25.679197 master-2 kubenswrapper[4776]: I1011 10:55:25.678877 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-dev" (OuterVolumeSpecName: "dev") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:25.682854 master-2 kubenswrapper[4776]: I1011 10:55:25.682789 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-scripts" (OuterVolumeSpecName: "scripts") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:25.682970 master-2 kubenswrapper[4776]: I1011 10:55:25.682844 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:25.683058 master-2 kubenswrapper[4776]: I1011 10:55:25.683035 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64da8a05-f383-4643-b08d-639963f8bdd5-kube-api-access-n864d" (OuterVolumeSpecName: "kube-api-access-n864d") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "kube-api-access-n864d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:25.715205 master-2 kubenswrapper[4776]: I1011 10:55:25.715148 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:25.745522 master-2 kubenswrapper[4776]: I1011 10:55:25.745472 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:25.767646 master-2 kubenswrapper[4776]: I1011 10:55:25.767525 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data" (OuterVolumeSpecName: "config-data") pod "64da8a05-f383-4643-b08d-639963f8bdd5" (UID: "64da8a05-f383-4643-b08d-639963f8bdd5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:25.779494 master-2 kubenswrapper[4776]: I1011 10:55:25.779332 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.779494 master-2 kubenswrapper[4776]: I1011 10:55:25.779378 4776 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data-custom\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.780000 master-2 kubenswrapper[4776]: I1011 10:55:25.779923 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n864d\" (UniqueName: \"kubernetes.io/projected/64da8a05-f383-4643-b08d-639963f8bdd5-kube-api-access-n864d\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.780045 master-2 kubenswrapper[4776]: I1011 10:55:25.780020 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.780045 master-2 kubenswrapper[4776]: I1011 10:55:25.780033 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64da8a05-f383-4643-b08d-639963f8bdd5-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.780045 master-2 kubenswrapper[4776]: I1011 10:55:25.780042 4776 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-var-locks-cinder\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.780148 master-2 kubenswrapper[4776]: I1011 10:55:25.780051 4776 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-etc-iscsi\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.780148 master-2 kubenswrapper[4776]: I1011 10:55:25.780059 4776 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-run\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:25.780148 master-2 kubenswrapper[4776]: I1011 10:55:25.780067 4776 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/64da8a05-f383-4643-b08d-639963f8bdd5-dev\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:26.093556 master-2 kubenswrapper[4776]: I1011 10:55:26.093479 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:26.093803 master-2 kubenswrapper[4776]: I1011 10:55:26.093666 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-66dfdcbff8-j4jhs" Oct 11 10:55:26.229154 master-2 kubenswrapper[4776]: I1011 10:55:26.229109 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-854549b758-grfk2"] Oct 11 10:55:26.548906 master-2 kubenswrapper[4776]: I1011 10:55:26.548776 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-854549b758-grfk2" event={"ID":"09078de9-6576-4afa-a94e-7b80617bba0f","Type":"ContainerStarted","Data":"1863112160e8eb8f7f94a0016bc5baba00482733893cceacc222fc426a64fc57"} Oct 11 10:55:26.548906 master-2 kubenswrapper[4776]: I1011 10:55:26.548848 4776 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-854549b758-grfk2" event={"ID":"09078de9-6576-4afa-a94e-7b80617bba0f","Type":"ContainerStarted","Data":"2fc0528238f103153bbac9e7a09546643ab74ad3439dc5514ba73dfd3aee059e"} Oct 11 10:55:26.548906 master-2 kubenswrapper[4776]: I1011 10:55:26.548888 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:26.551422 master-2 kubenswrapper[4776]: I1011 10:55:26.551289 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"7353cefe-e495-4633-9472-93497ca94612","Type":"ContainerStarted","Data":"d313f81d0a8278bef4e01893ade947879d98007ed6a8ec60b3e2299775595e63"} Oct 11 10:55:26.554194 master-2 kubenswrapper[4776]: I1011 10:55:26.553688 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.554309 master-2 kubenswrapper[4776]: I1011 10:55:26.554274 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-backup-0" event={"ID":"64da8a05-f383-4643-b08d-639963f8bdd5","Type":"ContainerDied","Data":"f08b0710ef4008f1cdc3f5be94ecdbead3ce33922cbd677781666c53de5eb3c1"} Oct 11 10:55:26.554309 master-2 kubenswrapper[4776]: I1011 10:55:26.554308 4776 scope.go:117] "RemoveContainer" containerID="5510c48b8a1b349e6fccbe8283441e4880694e565021961bec519004a98d24f9" Oct 11 10:55:26.585399 master-2 kubenswrapper[4776]: I1011 10:55:26.585330 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-854549b758-grfk2" podStartSLOduration=1.58530827 podStartE2EDuration="1.58530827s" podCreationTimestamp="2025-10-11 10:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:26.576410409 +0000 UTC m=+1761.360837118" watchObservedRunningTime="2025-10-11 10:55:26.58530827 +0000 UTC m=+1761.369734979" Oct 11 10:55:26.598258 master-2 kubenswrapper[4776]: I1011 10:55:26.598196 4776 scope.go:117] "RemoveContainer" containerID="6cbb734e786ac7a270d6c6e63f2ba748d07f4398724c5368110f5b4040ec1204" Oct 11 10:55:26.626444 master-2 kubenswrapper[4776]: I1011 10:55:26.626051 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b5802-backup-0"] Oct 11 10:55:26.645235 master-2 kubenswrapper[4776]: I1011 10:55:26.645187 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b5802-backup-0"] Oct 11 10:55:26.697936 master-2 kubenswrapper[4776]: I1011 10:55:26.697872 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b5802-backup-0"] Oct 11 10:55:26.698307 master-2 kubenswrapper[4776]: E1011 10:55:26.698271 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64da8a05-f383-4643-b08d-639963f8bdd5" containerName="probe" Oct 11 10:55:26.698307 master-2 kubenswrapper[4776]: I1011 10:55:26.698298 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="64da8a05-f383-4643-b08d-639963f8bdd5" containerName="probe" Oct 11 10:55:26.698419 master-2 kubenswrapper[4776]: E1011 10:55:26.698315 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64da8a05-f383-4643-b08d-639963f8bdd5" containerName="cinder-backup" Oct 11 10:55:26.698419 master-2 kubenswrapper[4776]: I1011 10:55:26.698325 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="64da8a05-f383-4643-b08d-639963f8bdd5" containerName="cinder-backup" Oct 11 
10:55:26.698581 master-2 kubenswrapper[4776]: I1011 10:55:26.698542 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="64da8a05-f383-4643-b08d-639963f8bdd5" containerName="cinder-backup" Oct 11 10:55:26.698581 master-2 kubenswrapper[4776]: I1011 10:55:26.698567 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="64da8a05-f383-4643-b08d-639963f8bdd5" containerName="probe" Oct 11 10:55:26.699559 master-2 kubenswrapper[4776]: I1011 10:55:26.699522 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-backup-0"] Oct 11 10:55:26.699638 master-2 kubenswrapper[4776]: I1011 10:55:26.699614 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.704776 master-2 kubenswrapper[4776]: I1011 10:55:26.703373 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-backup-config-data" Oct 11 10:55:26.918409 master-2 kubenswrapper[4776]: I1011 10:55:26.918079 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-var-locks-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918409 master-2 kubenswrapper[4776]: I1011 10:55:26.918234 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-lib-modules\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918409 master-2 kubenswrapper[4776]: I1011 10:55:26.918317 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-scripts\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918788 master-2 kubenswrapper[4776]: I1011 10:55:26.918514 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-run\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918788 master-2 kubenswrapper[4776]: I1011 10:55:26.918655 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-config-data\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918788 master-2 kubenswrapper[4776]: I1011 10:55:26.918718 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-sys\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918788 master-2 kubenswrapper[4776]: I1011 10:55:26.918746 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-dev\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918788 master-2 kubenswrapper[4776]: I1011 10:55:26.918784 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-combined-ca-bundle\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918989 master-2 kubenswrapper[4776]: I1011 10:55:26.918817 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-config-data-custom\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918989 master-2 kubenswrapper[4776]: I1011 10:55:26.918856 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-var-lib-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918989 master-2 kubenswrapper[4776]: I1011 10:55:26.918886 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-etc-machine-id\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918989 master-2 kubenswrapper[4776]: I1011 10:55:26.918912 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-var-locks-brick\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918989 master-2 kubenswrapper[4776]: I1011 10:55:26.918928 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwv5n\" (UniqueName: \"kubernetes.io/projected/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-kube-api-access-zwv5n\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.918989 master-2 kubenswrapper[4776]: I1011 10:55:26.918946 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-etc-iscsi\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:26.919193 master-2 kubenswrapper[4776]: I1011 10:55:26.919022 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-etc-nvme\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.020278 master-2 kubenswrapper[4776]: I1011 10:55:27.020224 4776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-etc-machine-id\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.020509 master-2 kubenswrapper[4776]: I1011 10:55:27.020496 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-var-locks-brick\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.020592 master-2 kubenswrapper[4776]: I1011 10:55:27.020580 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwv5n\" (UniqueName: \"kubernetes.io/projected/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-kube-api-access-zwv5n\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.020732 master-2 kubenswrapper[4776]: I1011 10:55:27.020714 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-etc-iscsi\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.020848 master-2 kubenswrapper[4776]: I1011 10:55:27.020458 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-etc-machine-id\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.020893 master-2 kubenswrapper[4776]: I1011 10:55:27.020619 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-var-locks-brick\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.020938 master-2 kubenswrapper[4776]: I1011 10:55:27.020836 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-etc-nvme\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021113 master-2 kubenswrapper[4776]: I1011 10:55:27.020940 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-etc-iscsi\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021161 master-2 kubenswrapper[4776]: I1011 10:55:27.020981 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-etc-nvme\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021161 master-2 kubenswrapper[4776]: I1011 10:55:27.021096 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: 
\"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-var-locks-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021238 master-2 kubenswrapper[4776]: I1011 10:55:27.021216 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-lib-modules\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021287 master-2 kubenswrapper[4776]: I1011 10:55:27.021255 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-scripts\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021317 master-2 kubenswrapper[4776]: I1011 10:55:27.021285 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-run\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021317 master-2 kubenswrapper[4776]: I1011 10:55:27.021306 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-lib-modules\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021377 master-2 kubenswrapper[4776]: I1011 10:55:27.021346 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-config-data\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021413 master-2 kubenswrapper[4776]: I1011 10:55:27.021398 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-sys\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021481 master-2 kubenswrapper[4776]: I1011 10:55:27.021419 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-dev\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021481 master-2 kubenswrapper[4776]: I1011 10:55:27.021473 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-combined-ca-bundle\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021568 master-2 kubenswrapper[4776]: I1011 10:55:27.021515 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-config-data-custom\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " 
pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021601 master-2 kubenswrapper[4776]: I1011 10:55:27.021569 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-var-lib-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021694 master-2 kubenswrapper[4776]: I1011 10:55:27.021628 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-sys\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021756 master-2 kubenswrapper[4776]: I1011 10:55:27.021345 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-run\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021804 master-2 kubenswrapper[4776]: I1011 10:55:27.021743 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-var-lib-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021804 master-2 kubenswrapper[4776]: I1011 10:55:27.021761 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-dev\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.021903 master-2 kubenswrapper[4776]: I1011 10:55:27.021885 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-var-locks-cinder\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.024649 master-2 kubenswrapper[4776]: I1011 10:55:27.024585 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-scripts\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.024977 master-2 kubenswrapper[4776]: I1011 10:55:27.024949 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-combined-ca-bundle\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.026034 master-2 kubenswrapper[4776]: I1011 10:55:27.026002 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-config-data-custom\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.027527 master-2 kubenswrapper[4776]: I1011 10:55:27.027498 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-config-data\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.058449 master-2 kubenswrapper[4776]: I1011 10:55:27.058381 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwv5n\" (UniqueName: \"kubernetes.io/projected/e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a-kube-api-access-zwv5n\") pod \"cinder-b5802-backup-0\" (UID: \"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a\") " pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.137492 master-2 kubenswrapper[4776]: I1011 10:55:27.137419 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-db76b8b85-xpl75"] Oct 11 10:55:27.138985 master-2 kubenswrapper[4776]: I1011 10:55:27.138951 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.142912 master-2 kubenswrapper[4776]: I1011 10:55:27.142871 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Oct 11 10:55:27.143130 master-2 kubenswrapper[4776]: I1011 10:55:27.143107 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Oct 11 10:55:27.143307 master-2 kubenswrapper[4776]: I1011 10:55:27.143282 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Oct 11 10:55:27.161925 master-2 kubenswrapper[4776]: I1011 10:55:27.161832 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-db76b8b85-xpl75"] Oct 11 10:55:27.320025 master-2 kubenswrapper[4776]: I1011 10:55:27.319975 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:27.327728 master-2 kubenswrapper[4776]: I1011 10:55:27.326582 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-config-data-custom\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.327728 master-2 kubenswrapper[4776]: I1011 10:55:27.326756 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lstld\" (UniqueName: \"kubernetes.io/projected/803272bc-1d03-4e1f-af8a-42b8d6e029d1-kube-api-access-lstld\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.327728 master-2 kubenswrapper[4776]: I1011 10:55:27.326790 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-public-tls-certs\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.327728 master-2 kubenswrapper[4776]: I1011 10:55:27.326881 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-internal-tls-certs\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.327728 master-2 kubenswrapper[4776]: I1011 10:55:27.326902 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-combined-ca-bundle\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.327728 master-2 kubenswrapper[4776]: I1011 10:55:27.326928 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-config-data\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.429489 master-2 kubenswrapper[4776]: I1011 10:55:27.429386 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lstld\" (UniqueName: \"kubernetes.io/projected/803272bc-1d03-4e1f-af8a-42b8d6e029d1-kube-api-access-lstld\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.429489 master-2 kubenswrapper[4776]: I1011 10:55:27.429454 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-public-tls-certs\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.429741 master-2 kubenswrapper[4776]: I1011 10:55:27.429515 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-internal-tls-certs\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.429741 master-2 kubenswrapper[4776]: I1011 10:55:27.429532 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-combined-ca-bundle\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.429741 master-2 kubenswrapper[4776]: I1011 10:55:27.429558 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-config-data\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.429741 master-2 kubenswrapper[4776]: I1011 10:55:27.429588 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-config-data-custom\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.435237 master-2 kubenswrapper[4776]: I1011 10:55:27.435189 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-config-data-custom\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.436599 master-2 kubenswrapper[4776]: I1011 10:55:27.436554 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-public-tls-certs\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.438044 master-2 kubenswrapper[4776]: I1011 10:55:27.437996 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-config-data\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.438147 master-2 kubenswrapper[4776]: I1011 10:55:27.438109 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-combined-ca-bundle\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.439444 master-2 kubenswrapper[4776]: I1011 10:55:27.439382 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/803272bc-1d03-4e1f-af8a-42b8d6e029d1-internal-tls-certs\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.452455 master-2 kubenswrapper[4776]: I1011 10:55:27.452375 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lstld\" (UniqueName: 
\"kubernetes.io/projected/803272bc-1d03-4e1f-af8a-42b8d6e029d1-kube-api-access-lstld\") pod \"heat-api-db76b8b85-xpl75\" (UID: \"803272bc-1d03-4e1f-af8a-42b8d6e029d1\") " pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.458523 master-2 kubenswrapper[4776]: I1011 10:55:27.458458 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:27.565696 master-2 kubenswrapper[4776]: I1011 10:55:27.564297 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"7353cefe-e495-4633-9472-93497ca94612","Type":"ContainerStarted","Data":"3b4c4342002f1ca53ae36a5961557e4addd090f4ce6b5d57db4005435cbb0b8e"} Oct 11 10:55:27.598652 master-2 kubenswrapper[4776]: I1011 10:55:27.598579 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b5802-default-internal-api-0" podStartSLOduration=8.598488932 podStartE2EDuration="8.598488932s" podCreationTimestamp="2025-10-11 10:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:27.591762389 +0000 UTC m=+1762.376189098" watchObservedRunningTime="2025-10-11 10:55:27.598488932 +0000 UTC m=+1762.382915641" Oct 11 10:55:27.958720 master-2 kubenswrapper[4776]: I1011 10:55:27.957309 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:28.129975 master-2 kubenswrapper[4776]: I1011 10:55:28.129909 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64da8a05-f383-4643-b08d-639963f8bdd5" path="/var/lib/kubelet/pods/64da8a05-f383-4643-b08d-639963f8bdd5/volumes" Oct 11 10:55:28.130727 master-2 kubenswrapper[4776]: I1011 10:55:28.130644 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-db76b8b85-xpl75"] Oct 11 10:55:28.130801 master-2 kubenswrapper[4776]: I1011 10:55:28.130732 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-backup-0"] Oct 11 10:55:28.233326 master-2 kubenswrapper[4776]: I1011 10:55:28.233251 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-b5802-scheduler-0" Oct 11 10:55:28.576857 master-2 kubenswrapper[4776]: I1011 10:55:28.576818 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-db76b8b85-xpl75" event={"ID":"803272bc-1d03-4e1f-af8a-42b8d6e029d1","Type":"ContainerStarted","Data":"9d62c43b387d0c33d9adad19285acd9956ee4a3d6676c951e46cdaf5931fc6f0"} Oct 11 10:55:28.580360 master-2 kubenswrapper[4776]: I1011 10:55:28.580316 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-backup-0" event={"ID":"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a","Type":"ContainerStarted","Data":"1149fabecee31777f28d014ce56af4789a79360d8446c24f16e86a789f09e7db"} Oct 11 10:55:28.580360 master-2 kubenswrapper[4776]: I1011 10:55:28.580355 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-backup-0" event={"ID":"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a","Type":"ContainerStarted","Data":"448518ea4caccce3fab360b053c0052558f01a28be4a8199bdfabace42d3327b"} Oct 11 10:55:28.581940 master-2 kubenswrapper[4776]: I1011 10:55:28.581892 4776 generic.go:334] "Generic (PLEG): container finished" podID="98ff7c8d-cc7c-4b25-917b-88dfa7f837c5" containerID="ddf25d3e6dfc76b87ac6599854740e9e3e9b46a167eab6435fa6a770ea42138e" exitCode=0 Oct 11 
10:55:28.582260 master-2 kubenswrapper[4776]: I1011 10:55:28.582229 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5","Type":"ContainerDied","Data":"ddf25d3e6dfc76b87ac6599854740e9e3e9b46a167eab6435fa6a770ea42138e"} Oct 11 10:55:29.602153 master-2 kubenswrapper[4776]: I1011 10:55:29.602033 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-backup-0" event={"ID":"e0ec933d-a60f-4ce1-9d1a-f6ff5767f56a","Type":"ContainerStarted","Data":"0702bc6f895affee86cc9e5d5ededd1bae1cb5a2466e13dd02b7ec597bef759d"} Oct 11 10:55:31.512163 master-2 kubenswrapper[4776]: I1011 10:55:31.512126 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:31.512691 master-2 kubenswrapper[4776]: I1011 10:55:31.512222 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:31.574796 master-2 kubenswrapper[4776]: I1011 10:55:31.569968 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:31.589530 master-2 kubenswrapper[4776]: I1011 10:55:31.589485 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:31.612906 master-2 kubenswrapper[4776]: I1011 10:55:31.612837 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b5802-backup-0" podStartSLOduration=5.612815097 podStartE2EDuration="5.612815097s" podCreationTimestamp="2025-10-11 10:55:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:29.779278517 +0000 UTC m=+1764.563705236" watchObservedRunningTime="2025-10-11 10:55:31.612815097 +0000 UTC m=+1766.397241806" Oct 11 10:55:31.646873 master-2 kubenswrapper[4776]: I1011 10:55:31.646527 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-db76b8b85-xpl75" event={"ID":"803272bc-1d03-4e1f-af8a-42b8d6e029d1","Type":"ContainerStarted","Data":"5796768e6f59a267b44c87a688513b751c2a2db01fdde74286cf46500fe7d585"} Oct 11 10:55:31.646873 master-2 kubenswrapper[4776]: I1011 10:55:31.646668 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:31.647093 master-2 kubenswrapper[4776]: I1011 10:55:31.647078 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:31.647139 master-2 kubenswrapper[4776]: I1011 10:55:31.647098 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:31.697873 master-2 kubenswrapper[4776]: I1011 10:55:31.696597 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-db76b8b85-xpl75" podStartSLOduration=2.37286526 podStartE2EDuration="4.696575156s" podCreationTimestamp="2025-10-11 10:55:27 +0000 UTC" firstStartedPulling="2025-10-11 10:55:28.198379689 +0000 UTC m=+1762.982806398" lastFinishedPulling="2025-10-11 10:55:30.522089585 +0000 UTC m=+1765.306516294" observedRunningTime="2025-10-11 10:55:31.68678995 +0000 UTC m=+1766.471216669" watchObservedRunningTime="2025-10-11 10:55:31.696575156 +0000 UTC m=+1766.481001865" Oct 11 
10:55:32.321051 master-2 kubenswrapper[4776]: I1011 10:55:32.320978 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:33.703139 master-2 kubenswrapper[4776]: I1011 10:55:33.703098 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:34.623235 master-2 kubenswrapper[4776]: I1011 10:55:34.623080 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:55:34.907381 master-2 kubenswrapper[4776]: I1011 10:55:34.905725 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:55:36.362773 master-2 kubenswrapper[4776]: I1011 10:55:36.362725 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:37.527120 master-2 kubenswrapper[4776]: I1011 10:55:37.525069 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-b5802-backup-0" Oct 11 10:55:38.706274 master-2 kubenswrapper[4776]: I1011 10:55:38.706215 4776 generic.go:334] "Generic (PLEG): container finished" podID="7e99b787-4e9b-4285-b175-63008b7e39de" containerID="d2f58a8e0319242a1b64a2af94772f7b9698c1eaa7f642a654e5e11cc2fe7f89" exitCode=137 Oct 11 10:55:38.706274 master-2 kubenswrapper[4776]: I1011 10:55:38.706266 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-2" event={"ID":"7e99b787-4e9b-4285-b175-63008b7e39de","Type":"ContainerDied","Data":"d2f58a8e0319242a1b64a2af94772f7b9698c1eaa7f642a654e5e11cc2fe7f89"} Oct 11 10:55:38.813693 master-2 kubenswrapper[4776]: I1011 10:55:38.813139 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-db76b8b85-xpl75" Oct 11 10:55:39.128658 master-2 kubenswrapper[4776]: I1011 10:55:39.128587 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-api-2" Oct 11 10:55:39.228757 master-2 kubenswrapper[4776]: I1011 10:55:39.227315 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e99b787-4e9b-4285-b175-63008b7e39de-etc-machine-id\") pod \"7e99b787-4e9b-4285-b175-63008b7e39de\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " Oct 11 10:55:39.228757 master-2 kubenswrapper[4776]: I1011 10:55:39.227449 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-combined-ca-bundle\") pod \"7e99b787-4e9b-4285-b175-63008b7e39de\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " Oct 11 10:55:39.228757 master-2 kubenswrapper[4776]: I1011 10:55:39.227992 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-scripts\") pod \"7e99b787-4e9b-4285-b175-63008b7e39de\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " Oct 11 10:55:39.228757 master-2 kubenswrapper[4776]: I1011 10:55:39.228032 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e99b787-4e9b-4285-b175-63008b7e39de-logs\") pod \"7e99b787-4e9b-4285-b175-63008b7e39de\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " Oct 11 10:55:39.228757 master-2 kubenswrapper[4776]: I1011 10:55:39.228145 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data\") pod \"7e99b787-4e9b-4285-b175-63008b7e39de\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " Oct 11 10:55:39.228757 master-2 kubenswrapper[4776]: I1011 10:55:39.228178 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqgnt\" (UniqueName: \"kubernetes.io/projected/7e99b787-4e9b-4285-b175-63008b7e39de-kube-api-access-cqgnt\") pod \"7e99b787-4e9b-4285-b175-63008b7e39de\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " Oct 11 10:55:39.228757 master-2 kubenswrapper[4776]: I1011 10:55:39.228259 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data-custom\") pod \"7e99b787-4e9b-4285-b175-63008b7e39de\" (UID: \"7e99b787-4e9b-4285-b175-63008b7e39de\") " Oct 11 10:55:39.233370 master-2 kubenswrapper[4776]: I1011 10:55:39.233269 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e99b787-4e9b-4285-b175-63008b7e39de-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7e99b787-4e9b-4285-b175-63008b7e39de" (UID: "7e99b787-4e9b-4285-b175-63008b7e39de"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:55:39.234078 master-2 kubenswrapper[4776]: I1011 10:55:39.233998 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e99b787-4e9b-4285-b175-63008b7e39de-logs" (OuterVolumeSpecName: "logs") pod "7e99b787-4e9b-4285-b175-63008b7e39de" (UID: "7e99b787-4e9b-4285-b175-63008b7e39de"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:55:39.236150 master-2 kubenswrapper[4776]: I1011 10:55:39.236078 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7e99b787-4e9b-4285-b175-63008b7e39de" (UID: "7e99b787-4e9b-4285-b175-63008b7e39de"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:39.236228 master-2 kubenswrapper[4776]: I1011 10:55:39.236178 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-scripts" (OuterVolumeSpecName: "scripts") pod "7e99b787-4e9b-4285-b175-63008b7e39de" (UID: "7e99b787-4e9b-4285-b175-63008b7e39de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:39.237234 master-2 kubenswrapper[4776]: I1011 10:55:39.237201 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e99b787-4e9b-4285-b175-63008b7e39de-kube-api-access-cqgnt" (OuterVolumeSpecName: "kube-api-access-cqgnt") pod "7e99b787-4e9b-4285-b175-63008b7e39de" (UID: "7e99b787-4e9b-4285-b175-63008b7e39de"). InnerVolumeSpecName "kube-api-access-cqgnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:39.269705 master-2 kubenswrapper[4776]: I1011 10:55:39.269282 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e99b787-4e9b-4285-b175-63008b7e39de" (UID: "7e99b787-4e9b-4285-b175-63008b7e39de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:39.283475 master-2 kubenswrapper[4776]: I1011 10:55:39.283400 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data" (OuterVolumeSpecName: "config-data") pod "7e99b787-4e9b-4285-b175-63008b7e39de" (UID: "7e99b787-4e9b-4285-b175-63008b7e39de"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:39.334492 master-2 kubenswrapper[4776]: I1011 10:55:39.334308 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:39.334492 master-2 kubenswrapper[4776]: I1011 10:55:39.334356 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:39.334492 master-2 kubenswrapper[4776]: I1011 10:55:39.334368 4776 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e99b787-4e9b-4285-b175-63008b7e39de-logs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:39.334492 master-2 kubenswrapper[4776]: I1011 10:55:39.334379 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:39.334492 master-2 kubenswrapper[4776]: I1011 10:55:39.334392 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqgnt\" (UniqueName: \"kubernetes.io/projected/7e99b787-4e9b-4285-b175-63008b7e39de-kube-api-access-cqgnt\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:39.334492 master-2 kubenswrapper[4776]: I1011 10:55:39.334406 4776 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e99b787-4e9b-4285-b175-63008b7e39de-config-data-custom\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:39.334492 master-2 kubenswrapper[4776]: I1011 10:55:39.334417 4776 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e99b787-4e9b-4285-b175-63008b7e39de-etc-machine-id\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:39.432904 master-2 kubenswrapper[4776]: I1011 10:55:39.432725 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-external-api-2"] Oct 11 10:55:39.433156 master-2 kubenswrapper[4776]: I1011 10:55:39.433028 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-external-api-2" podUID="a0831eef-0c2e-4d09-a44b-7276f30bc1cf" containerName="glance-log" containerID="cri-o://2d269532330f7a891baad11ad939ef35677256aa7bb563b56736833615edcf28" gracePeriod=30 Oct 11 10:55:39.433234 master-2 kubenswrapper[4776]: I1011 10:55:39.433177 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-external-api-2" podUID="a0831eef-0c2e-4d09-a44b-7276f30bc1cf" containerName="glance-httpd" containerID="cri-o://5770704138c0c2f874908ca2cc1d2acaea03506574846327d58cb886820df9bf" gracePeriod=30 Oct 11 10:55:39.717917 master-2 kubenswrapper[4776]: I1011 10:55:39.717766 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-api-2" Oct 11 10:55:39.720714 master-2 kubenswrapper[4776]: I1011 10:55:39.717766 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-2" event={"ID":"7e99b787-4e9b-4285-b175-63008b7e39de","Type":"ContainerDied","Data":"3fa25e1670b8b5d447679f8aa5304db6a9e223ba597842d1098f20a3ef6c774c"} Oct 11 10:55:39.720714 master-2 kubenswrapper[4776]: I1011 10:55:39.719967 4776 scope.go:117] "RemoveContainer" containerID="d2f58a8e0319242a1b64a2af94772f7b9698c1eaa7f642a654e5e11cc2fe7f89" Oct 11 10:55:39.729392 master-2 kubenswrapper[4776]: I1011 10:55:39.729349 4776 generic.go:334] "Generic (PLEG): container finished" podID="a0831eef-0c2e-4d09-a44b-7276f30bc1cf" containerID="2d269532330f7a891baad11ad939ef35677256aa7bb563b56736833615edcf28" exitCode=143 Oct 11 10:55:39.729559 master-2 kubenswrapper[4776]: I1011 10:55:39.729399 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-2" event={"ID":"a0831eef-0c2e-4d09-a44b-7276f30bc1cf","Type":"ContainerDied","Data":"2d269532330f7a891baad11ad939ef35677256aa7bb563b56736833615edcf28"} Oct 11 10:55:39.744320 master-2 kubenswrapper[4776]: I1011 10:55:39.744288 4776 scope.go:117] "RemoveContainer" containerID="0547089e6afc3eb60691c0ebbe3c41ee9104f9a77e690771e776160cdc0930fb" Oct 11 10:55:39.760822 master-2 kubenswrapper[4776]: I1011 10:55:39.760746 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b5802-api-2"] Oct 11 10:55:39.770038 master-2 kubenswrapper[4776]: I1011 10:55:39.769954 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b5802-api-2"] Oct 11 10:55:39.810364 master-2 kubenswrapper[4776]: I1011 10:55:39.810297 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b5802-api-2"] Oct 11 10:55:39.811366 master-2 kubenswrapper[4776]: E1011 10:55:39.811323 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e99b787-4e9b-4285-b175-63008b7e39de" containerName="cinder-api" Oct 11 10:55:39.811366 master-2 kubenswrapper[4776]: I1011 10:55:39.811358 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e99b787-4e9b-4285-b175-63008b7e39de" containerName="cinder-api" Oct 11 10:55:39.811500 master-2 kubenswrapper[4776]: E1011 10:55:39.811389 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e99b787-4e9b-4285-b175-63008b7e39de" containerName="cinder-b5802-api-log" Oct 11 10:55:39.811500 master-2 kubenswrapper[4776]: I1011 10:55:39.811398 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e99b787-4e9b-4285-b175-63008b7e39de" containerName="cinder-b5802-api-log" Oct 11 10:55:39.811660 master-2 kubenswrapper[4776]: I1011 10:55:39.811604 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e99b787-4e9b-4285-b175-63008b7e39de" containerName="cinder-api" Oct 11 10:55:39.811748 master-2 kubenswrapper[4776]: I1011 10:55:39.811671 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e99b787-4e9b-4285-b175-63008b7e39de" containerName="cinder-b5802-api-log" Oct 11 10:55:39.814187 master-2 kubenswrapper[4776]: I1011 10:55:39.813730 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b5802-api-2" Oct 11 10:55:39.817775 master-2 kubenswrapper[4776]: I1011 10:55:39.817733 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Oct 11 10:55:39.818036 master-2 kubenswrapper[4776]: I1011 10:55:39.817780 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b5802-api-config-data" Oct 11 10:55:39.818775 master-2 kubenswrapper[4776]: I1011 10:55:39.818745 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Oct 11 10:55:39.821271 master-2 kubenswrapper[4776]: I1011 10:55:39.821233 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-api-2"] Oct 11 10:55:39.949089 master-2 kubenswrapper[4776]: I1011 10:55:39.949027 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-internal-tls-certs\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:39.949325 master-2 kubenswrapper[4776]: I1011 10:55:39.949103 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-etc-machine-id\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:39.949325 master-2 kubenswrapper[4776]: I1011 10:55:39.949153 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-scripts\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:39.949325 master-2 kubenswrapper[4776]: I1011 10:55:39.949182 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-logs\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:39.949325 master-2 kubenswrapper[4776]: I1011 10:55:39.949208 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-config-data\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:39.949325 master-2 kubenswrapper[4776]: I1011 10:55:39.949296 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-combined-ca-bundle\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:39.949325 master-2 kubenswrapper[4776]: I1011 10:55:39.949320 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hbls\" (UniqueName: \"kubernetes.io/projected/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-kube-api-access-7hbls\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 
11 10:55:39.949594 master-2 kubenswrapper[4776]: I1011 10:55:39.949354 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-public-tls-certs\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:39.949594 master-2 kubenswrapper[4776]: I1011 10:55:39.949415 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-config-data-custom\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.051529 master-2 kubenswrapper[4776]: I1011 10:55:40.051394 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-config-data-custom\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.051529 master-2 kubenswrapper[4776]: I1011 10:55:40.051466 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-internal-tls-certs\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.051529 master-2 kubenswrapper[4776]: I1011 10:55:40.051502 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-etc-machine-id\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.051841 master-2 kubenswrapper[4776]: I1011 10:55:40.051542 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-scripts\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.051841 master-2 kubenswrapper[4776]: I1011 10:55:40.051560 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-logs\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.051841 master-2 kubenswrapper[4776]: I1011 10:55:40.051577 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-config-data\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.051841 master-2 kubenswrapper[4776]: I1011 10:55:40.051641 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hbls\" (UniqueName: \"kubernetes.io/projected/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-kube-api-access-7hbls\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.051841 master-2 kubenswrapper[4776]: I1011 10:55:40.051659 4776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-combined-ca-bundle\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.051841 master-2 kubenswrapper[4776]: I1011 10:55:40.051696 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-public-tls-certs\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.052104 master-2 kubenswrapper[4776]: I1011 10:55:40.051894 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-etc-machine-id\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.052186 master-2 kubenswrapper[4776]: I1011 10:55:40.052144 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-logs\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.055049 master-2 kubenswrapper[4776]: I1011 10:55:40.055007 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-config-data-custom\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.055303 master-2 kubenswrapper[4776]: I1011 10:55:40.055247 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-config-data\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.055549 master-2 kubenswrapper[4776]: I1011 10:55:40.055508 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-public-tls-certs\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.055618 master-2 kubenswrapper[4776]: I1011 10:55:40.055599 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-internal-tls-certs\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.055846 master-2 kubenswrapper[4776]: I1011 10:55:40.055806 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-combined-ca-bundle\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.056397 master-2 kubenswrapper[4776]: I1011 10:55:40.056369 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-scripts\") pod \"cinder-b5802-api-2\" (UID: 
\"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.073113 master-2 kubenswrapper[4776]: I1011 10:55:40.072970 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e99b787-4e9b-4285-b175-63008b7e39de" path="/var/lib/kubelet/pods/7e99b787-4e9b-4285-b175-63008b7e39de/volumes" Oct 11 10:55:40.075686 master-2 kubenswrapper[4776]: I1011 10:55:40.075585 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hbls\" (UniqueName: \"kubernetes.io/projected/963227eb-c8af-4bdf-a4dd-ddf78e2d3d57-kube-api-access-7hbls\") pod \"cinder-b5802-api-2\" (UID: \"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57\") " pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.131602 master-2 kubenswrapper[4776]: I1011 10:55:40.131538 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b5802-api-2" Oct 11 10:55:40.656951 master-2 kubenswrapper[4776]: I1011 10:55:40.656885 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b5802-api-2"] Oct 11 10:55:40.746184 master-2 kubenswrapper[4776]: I1011 10:55:40.746130 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-2" event={"ID":"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57","Type":"ContainerStarted","Data":"1339fa35aee16501eb487fbb01f91540a50ece578bb3e75a1be09cc263132c60"} Oct 11 10:55:41.418070 master-2 kubenswrapper[4776]: I1011 10:55:41.417888 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-748bbfcf89-tr8n2"] Oct 11 10:55:41.419806 master-2 kubenswrapper[4776]: I1011 10:55:41.419763 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.427996 master-2 kubenswrapper[4776]: I1011 10:55:41.427945 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Oct 11 10:55:41.428195 master-2 kubenswrapper[4776]: I1011 10:55:41.428098 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Oct 11 10:55:41.449467 master-2 kubenswrapper[4776]: I1011 10:55:41.449420 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-748bbfcf89-tr8n2"] Oct 11 10:55:41.483169 master-2 kubenswrapper[4776]: I1011 10:55:41.483096 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-public-tls-certs\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.483475 master-2 kubenswrapper[4776]: I1011 10:55:41.483189 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-httpd-config\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.483475 master-2 kubenswrapper[4776]: I1011 10:55:41.483261 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-combined-ca-bundle\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.483475 
master-2 kubenswrapper[4776]: I1011 10:55:41.483309 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-config\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.483475 master-2 kubenswrapper[4776]: I1011 10:55:41.483338 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rm8h\" (UniqueName: \"kubernetes.io/projected/7e2da74b-d7e3-45c1-8c4b-e01415113c95-kube-api-access-9rm8h\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.483475 master-2 kubenswrapper[4776]: I1011 10:55:41.483368 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-internal-tls-certs\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.484093 master-2 kubenswrapper[4776]: I1011 10:55:41.484057 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-ovndb-tls-certs\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.585769 master-2 kubenswrapper[4776]: I1011 10:55:41.585655 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-config\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.585769 master-2 kubenswrapper[4776]: I1011 10:55:41.585738 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rm8h\" (UniqueName: \"kubernetes.io/projected/7e2da74b-d7e3-45c1-8c4b-e01415113c95-kube-api-access-9rm8h\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.585769 master-2 kubenswrapper[4776]: I1011 10:55:41.585774 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-internal-tls-certs\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.589738 master-2 kubenswrapper[4776]: I1011 10:55:41.589490 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-config\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.590130 master-2 kubenswrapper[4776]: I1011 10:55:41.589762 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-internal-tls-certs\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " 
pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.592798 master-2 kubenswrapper[4776]: I1011 10:55:41.592766 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-ovndb-tls-certs\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.592923 master-2 kubenswrapper[4776]: I1011 10:55:41.592875 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-public-tls-certs\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.593005 master-2 kubenswrapper[4776]: I1011 10:55:41.592975 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-httpd-config\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.593103 master-2 kubenswrapper[4776]: I1011 10:55:41.593038 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-combined-ca-bundle\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.596455 master-2 kubenswrapper[4776]: I1011 10:55:41.596419 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-ovndb-tls-certs\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.598118 master-2 kubenswrapper[4776]: I1011 10:55:41.598088 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-public-tls-certs\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.603373 master-2 kubenswrapper[4776]: I1011 10:55:41.603062 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-httpd-config\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.607947 master-2 kubenswrapper[4776]: I1011 10:55:41.607804 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2da74b-d7e3-45c1-8c4b-e01415113c95-combined-ca-bundle\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:41.639101 master-2 kubenswrapper[4776]: I1011 10:55:41.632926 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rm8h\" (UniqueName: \"kubernetes.io/projected/7e2da74b-d7e3-45c1-8c4b-e01415113c95-kube-api-access-9rm8h\") pod \"neutron-748bbfcf89-tr8n2\" (UID: \"7e2da74b-d7e3-45c1-8c4b-e01415113c95\") " pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 
10:55:41.758500 master-2 kubenswrapper[4776]: I1011 10:55:41.758325 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-2" event={"ID":"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57","Type":"ContainerStarted","Data":"93db0b09e54cde1d1adab74a0f17b88c125781b5a4724d40de0780d77dfb38bd"} Oct 11 10:55:41.762322 master-2 kubenswrapper[4776]: I1011 10:55:41.762272 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:42.613343 master-2 kubenswrapper[4776]: I1011 10:55:42.612726 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-748bbfcf89-tr8n2"] Oct 11 10:55:42.665631 master-2 kubenswrapper[4776]: W1011 10:55:42.665543 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e2da74b_d7e3_45c1_8c4b_e01415113c95.slice/crio-28147c8ed528fe882308c9988a4413d5a0a1b32f150683ba8de265e037228a3d WatchSource:0}: Error finding container 28147c8ed528fe882308c9988a4413d5a0a1b32f150683ba8de265e037228a3d: Status 404 returned error can't find the container with id 28147c8ed528fe882308c9988a4413d5a0a1b32f150683ba8de265e037228a3d Oct 11 10:55:42.808402 master-2 kubenswrapper[4776]: I1011 10:55:42.808343 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b5802-api-2" event={"ID":"963227eb-c8af-4bdf-a4dd-ddf78e2d3d57","Type":"ContainerStarted","Data":"3f9f9b89aea4daee0c9203facbf87eef99f5f5e9ba646dcd06979bc6a462f337"} Oct 11 10:55:42.808402 master-2 kubenswrapper[4776]: I1011 10:55:42.808415 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-b5802-api-2" Oct 11 10:55:42.815707 master-2 kubenswrapper[4776]: I1011 10:55:42.815629 4776 generic.go:334] "Generic (PLEG): container finished" podID="a0831eef-0c2e-4d09-a44b-7276f30bc1cf" containerID="5770704138c0c2f874908ca2cc1d2acaea03506574846327d58cb886820df9bf" exitCode=0 Oct 11 10:55:42.815904 master-2 kubenswrapper[4776]: I1011 10:55:42.815734 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-2" event={"ID":"a0831eef-0c2e-4d09-a44b-7276f30bc1cf","Type":"ContainerDied","Data":"5770704138c0c2f874908ca2cc1d2acaea03506574846327d58cb886820df9bf"} Oct 11 10:55:42.817218 master-2 kubenswrapper[4776]: I1011 10:55:42.817182 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-748bbfcf89-tr8n2" event={"ID":"7e2da74b-d7e3-45c1-8c4b-e01415113c95","Type":"ContainerStarted","Data":"28147c8ed528fe882308c9988a4413d5a0a1b32f150683ba8de265e037228a3d"} Oct 11 10:55:42.847520 master-2 kubenswrapper[4776]: I1011 10:55:42.847375 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b5802-api-2" podStartSLOduration=3.847352848 podStartE2EDuration="3.847352848s" podCreationTimestamp="2025-10-11 10:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:42.838434466 +0000 UTC m=+1777.622861175" watchObservedRunningTime="2025-10-11 10:55:42.847352848 +0000 UTC m=+1777.631779557" Oct 11 10:55:43.797253 master-2 kubenswrapper[4776]: I1011 10:55:43.796860 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:43.832935 master-2 kubenswrapper[4776]: I1011 10:55:43.832849 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-2" event={"ID":"a0831eef-0c2e-4d09-a44b-7276f30bc1cf","Type":"ContainerDied","Data":"5a48d5bbfd49d56d4f32777007bb97dc3fb7108ea65533008682389d26fd8acc"} Oct 11 10:55:43.833488 master-2 kubenswrapper[4776]: I1011 10:55:43.832965 4776 scope.go:117] "RemoveContainer" containerID="5770704138c0c2f874908ca2cc1d2acaea03506574846327d58cb886820df9bf" Oct 11 10:55:43.833488 master-2 kubenswrapper[4776]: I1011 10:55:43.833154 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:43.839073 master-2 kubenswrapper[4776]: I1011 10:55:43.839038 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-748bbfcf89-tr8n2" event={"ID":"7e2da74b-d7e3-45c1-8c4b-e01415113c95","Type":"ContainerStarted","Data":"d87c91a1d2ccc6e15d38a410528e6d89dce602fb8e82c0e98233a0939f6cd7c3"} Oct 11 10:55:43.839567 master-2 kubenswrapper[4776]: I1011 10:55:43.839550 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-748bbfcf89-tr8n2" event={"ID":"7e2da74b-d7e3-45c1-8c4b-e01415113c95","Type":"ContainerStarted","Data":"ec47c17fd343b5bb68562630fde40a9c43e8e9ad6d725893f890bf8aace1f28c"} Oct 11 10:55:43.839644 master-2 kubenswrapper[4776]: I1011 10:55:43.839633 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:55:43.865825 master-2 kubenswrapper[4776]: I1011 10:55:43.865787 4776 scope.go:117] "RemoveContainer" containerID="2d269532330f7a891baad11ad939ef35677256aa7bb563b56736833615edcf28" Oct 11 10:55:43.876719 master-2 kubenswrapper[4776]: I1011 10:55:43.873184 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-config-data\") pod \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " Oct 11 10:55:43.876719 master-2 kubenswrapper[4776]: I1011 10:55:43.873265 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-logs\") pod \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " Oct 11 10:55:43.876719 master-2 kubenswrapper[4776]: I1011 10:55:43.873356 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-combined-ca-bundle\") pod \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " Oct 11 10:55:43.876719 master-2 kubenswrapper[4776]: I1011 10:55:43.873476 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-public-tls-certs\") pod \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " Oct 11 10:55:43.876719 master-2 kubenswrapper[4776]: I1011 10:55:43.873615 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007\") pod 
\"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " Oct 11 10:55:43.876719 master-2 kubenswrapper[4776]: I1011 10:55:43.873650 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-httpd-run\") pod \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " Oct 11 10:55:43.876719 master-2 kubenswrapper[4776]: I1011 10:55:43.873706 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-scripts\") pod \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " Oct 11 10:55:43.876719 master-2 kubenswrapper[4776]: I1011 10:55:43.873743 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f984s\" (UniqueName: \"kubernetes.io/projected/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-kube-api-access-f984s\") pod \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\" (UID: \"a0831eef-0c2e-4d09-a44b-7276f30bc1cf\") " Oct 11 10:55:43.876719 master-2 kubenswrapper[4776]: I1011 10:55:43.876069 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a0831eef-0c2e-4d09-a44b-7276f30bc1cf" (UID: "a0831eef-0c2e-4d09-a44b-7276f30bc1cf"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:55:43.880907 master-2 kubenswrapper[4776]: I1011 10:55:43.880584 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-logs" (OuterVolumeSpecName: "logs") pod "a0831eef-0c2e-4d09-a44b-7276f30bc1cf" (UID: "a0831eef-0c2e-4d09-a44b-7276f30bc1cf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:55:43.881339 master-2 kubenswrapper[4776]: I1011 10:55:43.881308 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-kube-api-access-f984s" (OuterVolumeSpecName: "kube-api-access-f984s") pod "a0831eef-0c2e-4d09-a44b-7276f30bc1cf" (UID: "a0831eef-0c2e-4d09-a44b-7276f30bc1cf"). InnerVolumeSpecName "kube-api-access-f984s". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:43.911766 master-2 kubenswrapper[4776]: I1011 10:55:43.907403 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007" (OuterVolumeSpecName: "glance") pod "a0831eef-0c2e-4d09-a44b-7276f30bc1cf" (UID: "a0831eef-0c2e-4d09-a44b-7276f30bc1cf"). InnerVolumeSpecName "pvc-96ecbc97-5be5-45e4-8942-00605756b89a". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 11 10:55:43.911766 master-2 kubenswrapper[4776]: I1011 10:55:43.907791 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-scripts" (OuterVolumeSpecName: "scripts") pod "a0831eef-0c2e-4d09-a44b-7276f30bc1cf" (UID: "a0831eef-0c2e-4d09-a44b-7276f30bc1cf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:43.919480 master-2 kubenswrapper[4776]: I1011 10:55:43.919320 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0831eef-0c2e-4d09-a44b-7276f30bc1cf" (UID: "a0831eef-0c2e-4d09-a44b-7276f30bc1cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:43.937705 master-2 kubenswrapper[4776]: I1011 10:55:43.933637 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-748bbfcf89-tr8n2" podStartSLOduration=2.933618899 podStartE2EDuration="2.933618899s" podCreationTimestamp="2025-10-11 10:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:43.929522837 +0000 UTC m=+1778.713949546" watchObservedRunningTime="2025-10-11 10:55:43.933618899 +0000 UTC m=+1778.718045608" Oct 11 10:55:43.951702 master-2 kubenswrapper[4776]: I1011 10:55:43.948995 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-config-data" (OuterVolumeSpecName: "config-data") pod "a0831eef-0c2e-4d09-a44b-7276f30bc1cf" (UID: "a0831eef-0c2e-4d09-a44b-7276f30bc1cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:43.955697 master-2 kubenswrapper[4776]: I1011 10:55:43.955538 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a0831eef-0c2e-4d09-a44b-7276f30bc1cf" (UID: "a0831eef-0c2e-4d09-a44b-7276f30bc1cf"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:43.978143 master-2 kubenswrapper[4776]: I1011 10:55:43.978090 4776 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-public-tls-certs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:43.978318 master-2 kubenswrapper[4776]: I1011 10:55:43.978150 4776 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-96ecbc97-5be5-45e4-8942-00605756b89a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007\") on node \"master-2\" " Oct 11 10:55:43.978318 master-2 kubenswrapper[4776]: I1011 10:55:43.978168 4776 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-httpd-run\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:43.978318 master-2 kubenswrapper[4776]: I1011 10:55:43.978182 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:43.978318 master-2 kubenswrapper[4776]: I1011 10:55:43.978225 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f984s\" (UniqueName: \"kubernetes.io/projected/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-kube-api-access-f984s\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:43.978318 master-2 kubenswrapper[4776]: I1011 10:55:43.978237 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:43.978318 master-2 kubenswrapper[4776]: I1011 10:55:43.978253 4776 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-logs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:43.978318 master-2 kubenswrapper[4776]: I1011 10:55:43.978265 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0831eef-0c2e-4d09-a44b-7276f30bc1cf-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:44.004621 master-2 kubenswrapper[4776]: I1011 10:55:44.004563 4776 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Oct 11 10:55:44.004869 master-2 kubenswrapper[4776]: I1011 10:55:44.004762 4776 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-96ecbc97-5be5-45e4-8942-00605756b89a" (UniqueName: "kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007") on node "master-2" Oct 11 10:55:44.080106 master-2 kubenswrapper[4776]: I1011 10:55:44.080039 4776 reconciler_common.go:293] "Volume detached for volume \"pvc-96ecbc97-5be5-45e4-8942-00605756b89a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:44.182820 master-2 kubenswrapper[4776]: I1011 10:55:44.182761 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-external-api-2"] Oct 11 10:55:44.193402 master-2 kubenswrapper[4776]: I1011 10:55:44.193351 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b5802-default-external-api-2"] Oct 11 10:55:44.241795 master-2 kubenswrapper[4776]: I1011 10:55:44.238497 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b5802-default-external-api-2"] Oct 11 10:55:44.241795 master-2 kubenswrapper[4776]: E1011 10:55:44.239230 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0831eef-0c2e-4d09-a44b-7276f30bc1cf" containerName="glance-log" Oct 11 10:55:44.241795 master-2 kubenswrapper[4776]: I1011 10:55:44.239248 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0831eef-0c2e-4d09-a44b-7276f30bc1cf" containerName="glance-log" Oct 11 10:55:44.241795 master-2 kubenswrapper[4776]: E1011 10:55:44.239265 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0831eef-0c2e-4d09-a44b-7276f30bc1cf" containerName="glance-httpd" Oct 11 10:55:44.241795 master-2 kubenswrapper[4776]: I1011 10:55:44.239271 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0831eef-0c2e-4d09-a44b-7276f30bc1cf" containerName="glance-httpd" Oct 11 10:55:44.241795 master-2 kubenswrapper[4776]: I1011 10:55:44.239480 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0831eef-0c2e-4d09-a44b-7276f30bc1cf" containerName="glance-httpd" Oct 11 10:55:44.241795 master-2 kubenswrapper[4776]: I1011 10:55:44.239502 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0831eef-0c2e-4d09-a44b-7276f30bc1cf" containerName="glance-log" Oct 11 10:55:44.241795 master-2 kubenswrapper[4776]: I1011 10:55:44.240564 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.247804 master-2 kubenswrapper[4776]: I1011 10:55:44.244182 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-b5802-default-external-config-data" Oct 11 10:55:44.247804 master-2 kubenswrapper[4776]: I1011 10:55:44.244880 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 11 10:55:44.256004 master-2 kubenswrapper[4776]: I1011 10:55:44.255947 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-external-api-2"] Oct 11 10:55:44.285739 master-2 kubenswrapper[4776]: I1011 10:55:44.283087 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-combined-ca-bundle\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.285739 master-2 kubenswrapper[4776]: I1011 10:55:44.283161 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-96ecbc97-5be5-45e4-8942-00605756b89a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.285739 master-2 kubenswrapper[4776]: I1011 10:55:44.283240 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3afb9e92-33b4-4cbf-8857-de31fa326a7a-httpd-run\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.285739 master-2 kubenswrapper[4776]: I1011 10:55:44.283281 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-public-tls-certs\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.285739 master-2 kubenswrapper[4776]: I1011 10:55:44.283376 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h98d\" (UniqueName: \"kubernetes.io/projected/3afb9e92-33b4-4cbf-8857-de31fa326a7a-kube-api-access-9h98d\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.285739 master-2 kubenswrapper[4776]: I1011 10:55:44.283424 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-scripts\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.285739 master-2 kubenswrapper[4776]: I1011 10:55:44.283474 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-config-data\") pod 
\"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.285739 master-2 kubenswrapper[4776]: I1011 10:55:44.283509 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3afb9e92-33b4-4cbf-8857-de31fa326a7a-logs\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.385356 master-2 kubenswrapper[4776]: I1011 10:55:44.385297 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-public-tls-certs\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.385612 master-2 kubenswrapper[4776]: I1011 10:55:44.385397 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h98d\" (UniqueName: \"kubernetes.io/projected/3afb9e92-33b4-4cbf-8857-de31fa326a7a-kube-api-access-9h98d\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.385612 master-2 kubenswrapper[4776]: I1011 10:55:44.385432 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-scripts\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.385612 master-2 kubenswrapper[4776]: I1011 10:55:44.385463 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-config-data\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.385612 master-2 kubenswrapper[4776]: I1011 10:55:44.385485 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3afb9e92-33b4-4cbf-8857-de31fa326a7a-logs\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.385612 master-2 kubenswrapper[4776]: I1011 10:55:44.385514 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-combined-ca-bundle\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.385612 master-2 kubenswrapper[4776]: I1011 10:55:44.385542 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-96ecbc97-5be5-45e4-8942-00605756b89a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.385612 master-2 kubenswrapper[4776]: I1011 10:55:44.385584 4776 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3afb9e92-33b4-4cbf-8857-de31fa326a7a-httpd-run\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.386123 master-2 kubenswrapper[4776]: I1011 10:55:44.386079 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3afb9e92-33b4-4cbf-8857-de31fa326a7a-logs\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.386206 master-2 kubenswrapper[4776]: I1011 10:55:44.386161 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3afb9e92-33b4-4cbf-8857-de31fa326a7a-httpd-run\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.387911 master-2 kubenswrapper[4776]: I1011 10:55:44.387870 4776 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 11 10:55:44.387911 master-2 kubenswrapper[4776]: I1011 10:55:44.387897 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-96ecbc97-5be5-45e4-8942-00605756b89a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/977628254c2695ff17425dccc1fbe376fb7c4f4d8dfcfd87eb3a48ca9779afa1/globalmount\"" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.389407 master-2 kubenswrapper[4776]: I1011 10:55:44.389359 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-combined-ca-bundle\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.391636 master-2 kubenswrapper[4776]: I1011 10:55:44.391580 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-scripts\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.393095 master-2 kubenswrapper[4776]: I1011 10:55:44.393047 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-config-data\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.394542 master-2 kubenswrapper[4776]: I1011 10:55:44.394495 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3afb9e92-33b4-4cbf-8857-de31fa326a7a-public-tls-certs\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:44.556894 master-2 
kubenswrapper[4776]: I1011 10:55:44.555314 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h98d\" (UniqueName: \"kubernetes.io/projected/3afb9e92-33b4-4cbf-8857-de31fa326a7a-kube-api-access-9h98d\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:45.271955 master-2 kubenswrapper[4776]: I1011 10:55:45.271864 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-96ecbc97-5be5-45e4-8942-00605756b89a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8c99b103-bf4e-4fc0-a1a5-344e177df007\") pod \"glance-b5802-default-external-api-2\" (UID: \"3afb9e92-33b4-4cbf-8857-de31fa326a7a\") " pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:45.466457 master-2 kubenswrapper[4776]: I1011 10:55:45.466398 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:45.754426 master-2 kubenswrapper[4776]: I1011 10:55:45.754107 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-854549b758-grfk2" Oct 11 10:55:46.073076 master-2 kubenswrapper[4776]: I1011 10:55:46.072923 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0831eef-0c2e-4d09-a44b-7276f30bc1cf" path="/var/lib/kubelet/pods/a0831eef-0c2e-4d09-a44b-7276f30bc1cf/volumes" Oct 11 10:55:46.302008 master-2 kubenswrapper[4776]: I1011 10:55:46.301915 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-86bdd47775-gpz8z"] Oct 11 10:55:46.302848 master-2 kubenswrapper[4776]: I1011 10:55:46.302285 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-86bdd47775-gpz8z" podUID="fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" containerName="heat-engine" containerID="cri-o://d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c" gracePeriod=60 Oct 11 10:55:46.335370 master-2 kubenswrapper[4776]: E1011 10:55:46.335097 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 11 10:55:46.345590 master-2 kubenswrapper[4776]: E1011 10:55:46.345511 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 11 10:55:46.348703 master-2 kubenswrapper[4776]: E1011 10:55:46.347414 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 11 10:55:46.348703 master-2 kubenswrapper[4776]: E1011 10:55:46.347456 4776 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-86bdd47775-gpz8z" 
podUID="fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" containerName="heat-engine" Oct 11 10:55:46.689848 master-2 kubenswrapper[4776]: I1011 10:55:46.689790 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-external-api-2"] Oct 11 10:55:46.909813 master-2 kubenswrapper[4776]: I1011 10:55:46.909750 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-2" event={"ID":"3afb9e92-33b4-4cbf-8857-de31fa326a7a","Type":"ContainerStarted","Data":"216f96730d54b66928bbd66a1d8ed9534f53af6a012174f7434f249334d47fb5"} Oct 11 10:55:47.313802 master-2 kubenswrapper[4776]: I1011 10:55:47.313701 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-faff-account-create-nc9gw"] Oct 11 10:55:47.315442 master-2 kubenswrapper[4776]: I1011 10:55:47.315397 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-faff-account-create-nc9gw" Oct 11 10:55:47.333788 master-2 kubenswrapper[4776]: I1011 10:55:47.323498 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Oct 11 10:55:47.350368 master-2 kubenswrapper[4776]: I1011 10:55:47.350069 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-595686b98f-4tcwj"] Oct 11 10:55:47.350572 master-2 kubenswrapper[4776]: I1011 10:55:47.350372 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" podUID="70447ad9-31f0-4f6a-8c40-19fbe8141ada" containerName="dnsmasq-dns" containerID="cri-o://459c94f48331f93d40730495be1474d6c1c89c8df43275f7b4ad519ea521cb3d" gracePeriod=10 Oct 11 10:55:47.362744 master-2 kubenswrapper[4776]: I1011 10:55:47.362474 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-faff-account-create-nc9gw"] Oct 11 10:55:47.451650 master-2 kubenswrapper[4776]: I1011 10:55:47.451600 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l224\" (UniqueName: \"kubernetes.io/projected/be6ef0ad-fb25-4af2-a9fc-c89be4b1983b-kube-api-access-5l224\") pod \"nova-cell1-faff-account-create-nc9gw\" (UID: \"be6ef0ad-fb25-4af2-a9fc-c89be4b1983b\") " pod="openstack/nova-cell1-faff-account-create-nc9gw" Oct 11 10:55:47.556049 master-2 kubenswrapper[4776]: I1011 10:55:47.554722 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l224\" (UniqueName: \"kubernetes.io/projected/be6ef0ad-fb25-4af2-a9fc-c89be4b1983b-kube-api-access-5l224\") pod \"nova-cell1-faff-account-create-nc9gw\" (UID: \"be6ef0ad-fb25-4af2-a9fc-c89be4b1983b\") " pod="openstack/nova-cell1-faff-account-create-nc9gw" Oct 11 10:55:47.784451 master-2 kubenswrapper[4776]: I1011 10:55:47.784404 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l224\" (UniqueName: \"kubernetes.io/projected/be6ef0ad-fb25-4af2-a9fc-c89be4b1983b-kube-api-access-5l224\") pod \"nova-cell1-faff-account-create-nc9gw\" (UID: \"be6ef0ad-fb25-4af2-a9fc-c89be4b1983b\") " pod="openstack/nova-cell1-faff-account-create-nc9gw" Oct 11 10:55:47.941004 master-2 kubenswrapper[4776]: I1011 10:55:47.940955 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-2" event={"ID":"3afb9e92-33b4-4cbf-8857-de31fa326a7a","Type":"ContainerStarted","Data":"d0b6a6a7c2caccf14853e2a4c52cdb3f6c8df48048d7259941eaf69c074c04ff"} Oct 11 10:55:47.944528 master-2 
kubenswrapper[4776]: I1011 10:55:47.944494 4776 generic.go:334] "Generic (PLEG): container finished" podID="70447ad9-31f0-4f6a-8c40-19fbe8141ada" containerID="459c94f48331f93d40730495be1474d6c1c89c8df43275f7b4ad519ea521cb3d" exitCode=0 Oct 11 10:55:47.944897 master-2 kubenswrapper[4776]: I1011 10:55:47.944549 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" event={"ID":"70447ad9-31f0-4f6a-8c40-19fbe8141ada","Type":"ContainerDied","Data":"459c94f48331f93d40730495be1474d6c1c89c8df43275f7b4ad519ea521cb3d"} Oct 11 10:55:48.055324 master-2 kubenswrapper[4776]: I1011 10:55:48.055200 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-faff-account-create-nc9gw" Oct 11 10:55:48.463197 master-2 kubenswrapper[4776]: I1011 10:55:48.463138 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:55:48.582958 master-2 kubenswrapper[4776]: I1011 10:55:48.582870 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-svc\") pod \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " Oct 11 10:55:48.583335 master-2 kubenswrapper[4776]: I1011 10:55:48.583013 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znxxj\" (UniqueName: \"kubernetes.io/projected/70447ad9-31f0-4f6a-8c40-19fbe8141ada-kube-api-access-znxxj\") pod \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " Oct 11 10:55:48.583335 master-2 kubenswrapper[4776]: I1011 10:55:48.583072 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-config\") pod \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " Oct 11 10:55:48.583335 master-2 kubenswrapper[4776]: I1011 10:55:48.583110 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-swift-storage-0\") pod \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " Oct 11 10:55:48.583335 master-2 kubenswrapper[4776]: I1011 10:55:48.583233 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-nb\") pod \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " Oct 11 10:55:48.583335 master-2 kubenswrapper[4776]: I1011 10:55:48.583288 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-sb\") pod \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\" (UID: \"70447ad9-31f0-4f6a-8c40-19fbe8141ada\") " Oct 11 10:55:48.591792 master-2 kubenswrapper[4776]: I1011 10:55:48.591708 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70447ad9-31f0-4f6a-8c40-19fbe8141ada-kube-api-access-znxxj" (OuterVolumeSpecName: "kube-api-access-znxxj") pod "70447ad9-31f0-4f6a-8c40-19fbe8141ada" (UID: "70447ad9-31f0-4f6a-8c40-19fbe8141ada"). 
InnerVolumeSpecName "kube-api-access-znxxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:48.628372 master-2 kubenswrapper[4776]: I1011 10:55:48.628310 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "70447ad9-31f0-4f6a-8c40-19fbe8141ada" (UID: "70447ad9-31f0-4f6a-8c40-19fbe8141ada"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:48.628601 master-2 kubenswrapper[4776]: I1011 10:55:48.628482 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "70447ad9-31f0-4f6a-8c40-19fbe8141ada" (UID: "70447ad9-31f0-4f6a-8c40-19fbe8141ada"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:48.629131 master-2 kubenswrapper[4776]: I1011 10:55:48.629105 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-config" (OuterVolumeSpecName: "config") pod "70447ad9-31f0-4f6a-8c40-19fbe8141ada" (UID: "70447ad9-31f0-4f6a-8c40-19fbe8141ada"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:48.635214 master-2 kubenswrapper[4776]: I1011 10:55:48.635095 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "70447ad9-31f0-4f6a-8c40-19fbe8141ada" (UID: "70447ad9-31f0-4f6a-8c40-19fbe8141ada"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:48.637150 master-2 kubenswrapper[4776]: I1011 10:55:48.637124 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "70447ad9-31f0-4f6a-8c40-19fbe8141ada" (UID: "70447ad9-31f0-4f6a-8c40-19fbe8141ada"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:55:48.701530 master-2 kubenswrapper[4776]: I1011 10:55:48.699298 4776 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-svc\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:48.701530 master-2 kubenswrapper[4776]: I1011 10:55:48.699340 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znxxj\" (UniqueName: \"kubernetes.io/projected/70447ad9-31f0-4f6a-8c40-19fbe8141ada-kube-api-access-znxxj\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:48.701530 master-2 kubenswrapper[4776]: I1011 10:55:48.699352 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:48.701530 master-2 kubenswrapper[4776]: I1011 10:55:48.699360 4776 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-dns-swift-storage-0\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:48.701530 master-2 kubenswrapper[4776]: I1011 10:55:48.699369 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-nb\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:48.701530 master-2 kubenswrapper[4776]: I1011 10:55:48.699378 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70447ad9-31f0-4f6a-8c40-19fbe8141ada-ovsdbserver-sb\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:48.944291 master-2 kubenswrapper[4776]: W1011 10:55:48.943866 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe6ef0ad_fb25_4af2_a9fc_c89be4b1983b.slice/crio-6421511dcf0ab89c9e53d65d6b14be5caf661ab4c5ff703f9d8dfafd5a15ebfd WatchSource:0}: Error finding container 6421511dcf0ab89c9e53d65d6b14be5caf661ab4c5ff703f9d8dfafd5a15ebfd: Status 404 returned error can't find the container with id 6421511dcf0ab89c9e53d65d6b14be5caf661ab4c5ff703f9d8dfafd5a15ebfd Oct 11 10:55:48.958087 master-2 kubenswrapper[4776]: I1011 10:55:48.958049 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-faff-account-create-nc9gw" event={"ID":"be6ef0ad-fb25-4af2-a9fc-c89be4b1983b","Type":"ContainerStarted","Data":"6421511dcf0ab89c9e53d65d6b14be5caf661ab4c5ff703f9d8dfafd5a15ebfd"} Oct 11 10:55:48.959943 master-2 kubenswrapper[4776]: I1011 10:55:48.959868 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-external-api-2" event={"ID":"3afb9e92-33b4-4cbf-8857-de31fa326a7a","Type":"ContainerStarted","Data":"e706f6ed99469eec9e6d7866abb1c193cc8d11c82b1dff538542449bdc25d671"} Oct 11 10:55:48.963929 master-2 kubenswrapper[4776]: I1011 10:55:48.963876 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" event={"ID":"70447ad9-31f0-4f6a-8c40-19fbe8141ada","Type":"ContainerDied","Data":"deced8507734b7d702b6a986cf9629954a10ef889b4d591c0c86c8d9b9826ad7"} Oct 11 10:55:48.963929 master-2 kubenswrapper[4776]: I1011 10:55:48.963922 4776 scope.go:117] "RemoveContainer" containerID="459c94f48331f93d40730495be1474d6c1c89c8df43275f7b4ad519ea521cb3d" Oct 11 10:55:48.964362 master-2 kubenswrapper[4776]: 
I1011 10:55:48.964022 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" Oct 11 10:55:49.020912 master-2 kubenswrapper[4776]: I1011 10:55:49.020879 4776 scope.go:117] "RemoveContainer" containerID="304ada663003ba027290aa5f510ba1f9e62024cd530b437aab6c3371a94b50d9" Oct 11 10:55:49.158197 master-2 kubenswrapper[4776]: I1011 10:55:49.158141 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-faff-account-create-nc9gw"] Oct 11 10:55:49.423240 master-2 kubenswrapper[4776]: I1011 10:55:49.423033 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b5802-default-external-api-2" podStartSLOduration=5.423017016 podStartE2EDuration="5.423017016s" podCreationTimestamp="2025-10-11 10:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:55:49.420374725 +0000 UTC m=+1784.204801424" watchObservedRunningTime="2025-10-11 10:55:49.423017016 +0000 UTC m=+1784.207443725" Oct 11 10:55:49.671588 master-2 kubenswrapper[4776]: I1011 10:55:49.671528 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-595686b98f-4tcwj"] Oct 11 10:55:49.745658 master-2 kubenswrapper[4776]: I1011 10:55:49.745596 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-595686b98f-4tcwj"] Oct 11 10:55:49.982348 master-2 kubenswrapper[4776]: I1011 10:55:49.982220 4776 generic.go:334] "Generic (PLEG): container finished" podID="be6ef0ad-fb25-4af2-a9fc-c89be4b1983b" containerID="b466abf7a0b4707a238b9568c5f5c7ad243418122b1d4aff19889a45820a6369" exitCode=0 Oct 11 10:55:49.982348 master-2 kubenswrapper[4776]: I1011 10:55:49.982318 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-faff-account-create-nc9gw" event={"ID":"be6ef0ad-fb25-4af2-a9fc-c89be4b1983b","Type":"ContainerDied","Data":"b466abf7a0b4707a238b9568c5f5c7ad243418122b1d4aff19889a45820a6369"} Oct 11 10:55:50.077192 master-2 kubenswrapper[4776]: I1011 10:55:50.077123 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70447ad9-31f0-4f6a-8c40-19fbe8141ada" path="/var/lib/kubelet/pods/70447ad9-31f0-4f6a-8c40-19fbe8141ada/volumes" Oct 11 10:55:51.923904 master-2 kubenswrapper[4776]: I1011 10:55:51.923850 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-faff-account-create-nc9gw" Oct 11 10:55:51.965029 master-2 kubenswrapper[4776]: I1011 10:55:51.964984 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l224\" (UniqueName: \"kubernetes.io/projected/be6ef0ad-fb25-4af2-a9fc-c89be4b1983b-kube-api-access-5l224\") pod \"be6ef0ad-fb25-4af2-a9fc-c89be4b1983b\" (UID: \"be6ef0ad-fb25-4af2-a9fc-c89be4b1983b\") " Oct 11 10:55:51.968285 master-2 kubenswrapper[4776]: I1011 10:55:51.968240 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be6ef0ad-fb25-4af2-a9fc-c89be4b1983b-kube-api-access-5l224" (OuterVolumeSpecName: "kube-api-access-5l224") pod "be6ef0ad-fb25-4af2-a9fc-c89be4b1983b" (UID: "be6ef0ad-fb25-4af2-a9fc-c89be4b1983b"). InnerVolumeSpecName "kube-api-access-5l224". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:51.998508 master-2 kubenswrapper[4776]: I1011 10:55:51.998448 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-faff-account-create-nc9gw" event={"ID":"be6ef0ad-fb25-4af2-a9fc-c89be4b1983b","Type":"ContainerDied","Data":"6421511dcf0ab89c9e53d65d6b14be5caf661ab4c5ff703f9d8dfafd5a15ebfd"} Oct 11 10:55:51.998508 master-2 kubenswrapper[4776]: I1011 10:55:51.998507 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6421511dcf0ab89c9e53d65d6b14be5caf661ab4c5ff703f9d8dfafd5a15ebfd" Oct 11 10:55:51.998639 master-2 kubenswrapper[4776]: I1011 10:55:51.998514 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-faff-account-create-nc9gw" Oct 11 10:55:52.041217 master-2 kubenswrapper[4776]: I1011 10:55:52.041164 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-b5802-api-2" Oct 11 10:55:52.069185 master-2 kubenswrapper[4776]: I1011 10:55:52.069097 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l224\" (UniqueName: \"kubernetes.io/projected/be6ef0ad-fb25-4af2-a9fc-c89be4b1983b-kube-api-access-5l224\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:53.443859 master-2 kubenswrapper[4776]: I1011 10:55:53.443786 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-595686b98f-4tcwj" podUID="70447ad9-31f0-4f6a-8c40-19fbe8141ada" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.143:5353: i/o timeout" Oct 11 10:55:55.468266 master-2 kubenswrapper[4776]: I1011 10:55:55.468204 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:55.468266 master-2 kubenswrapper[4776]: I1011 10:55:55.468261 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:55.504728 master-2 kubenswrapper[4776]: I1011 10:55:55.504347 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:55.509332 master-2 kubenswrapper[4776]: I1011 10:55:55.509276 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:56.045072 master-2 kubenswrapper[4776]: I1011 10:55:56.044889 4776 generic.go:334] "Generic (PLEG): container finished" podID="fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" containerID="d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c" exitCode=0 Oct 11 10:55:56.045072 master-2 kubenswrapper[4776]: I1011 10:55:56.044980 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-86bdd47775-gpz8z" event={"ID":"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49","Type":"ContainerDied","Data":"d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c"} Oct 11 10:55:56.045378 master-2 kubenswrapper[4776]: I1011 10:55:56.045294 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:56.045378 master-2 kubenswrapper[4776]: I1011 10:55:56.045314 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:56.333793 master-2 kubenswrapper[4776]: E1011 10:55:56.333722 4776 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c is running failed: container process not found" containerID="d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 11 10:55:56.339255 master-2 kubenswrapper[4776]: E1011 10:55:56.339205 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c is running failed: container process not found" containerID="d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 11 10:55:56.339758 master-2 kubenswrapper[4776]: E1011 10:55:56.339667 4776 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c is running failed: container process not found" containerID="d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 11 10:55:56.339851 master-2 kubenswrapper[4776]: E1011 10:55:56.339759 4776 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-86bdd47775-gpz8z" podUID="fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" containerName="heat-engine" Oct 11 10:55:57.057423 master-2 kubenswrapper[4776]: I1011 10:55:57.057325 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5","Type":"ContainerStarted","Data":"ab41c4a67f100a55ec2db0cdabcc4ba00728b7ab8e01537dd56a71b0f51c8c16"} Oct 11 10:55:57.201589 master-2 kubenswrapper[4776]: I1011 10:55:57.199987 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:57.285931 master-2 kubenswrapper[4776]: I1011 10:55:57.285862 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-combined-ca-bundle\") pod \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " Oct 11 10:55:57.286100 master-2 kubenswrapper[4776]: I1011 10:55:57.285950 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data-custom\") pod \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " Oct 11 10:55:57.286100 master-2 kubenswrapper[4776]: I1011 10:55:57.286042 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data\") pod \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " Oct 11 10:55:57.286183 master-2 kubenswrapper[4776]: I1011 10:55:57.286153 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kschj\" (UniqueName: \"kubernetes.io/projected/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-kube-api-access-kschj\") pod \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\" (UID: \"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49\") " Oct 11 10:55:57.289108 master-2 kubenswrapper[4776]: I1011 10:55:57.289049 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-kube-api-access-kschj" (OuterVolumeSpecName: "kube-api-access-kschj") pod "fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" (UID: "fa8c4de6-1a61-4bea-9e87-0157bc5eeb49"). InnerVolumeSpecName "kube-api-access-kschj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:55:57.292065 master-2 kubenswrapper[4776]: I1011 10:55:57.292020 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" (UID: "fa8c4de6-1a61-4bea-9e87-0157bc5eeb49"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:57.315445 master-2 kubenswrapper[4776]: I1011 10:55:57.315328 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" (UID: "fa8c4de6-1a61-4bea-9e87-0157bc5eeb49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:57.334276 master-2 kubenswrapper[4776]: I1011 10:55:57.334224 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data" (OuterVolumeSpecName: "config-data") pod "fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" (UID: "fa8c4de6-1a61-4bea-9e87-0157bc5eeb49"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:55:57.387824 master-2 kubenswrapper[4776]: I1011 10:55:57.387769 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kschj\" (UniqueName: \"kubernetes.io/projected/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-kube-api-access-kschj\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:57.387824 master-2 kubenswrapper[4776]: I1011 10:55:57.387810 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:57.387824 master-2 kubenswrapper[4776]: I1011 10:55:57.387820 4776 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data-custom\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:57.387824 master-2 kubenswrapper[4776]: I1011 10:55:57.387828 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:55:58.014007 master-2 kubenswrapper[4776]: I1011 10:55:58.013954 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:58.015758 master-2 kubenswrapper[4776]: I1011 10:55:58.015738 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-external-api-2" Oct 11 10:55:58.069915 master-2 kubenswrapper[4776]: I1011 10:55:58.069859 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-86bdd47775-gpz8z" Oct 11 10:55:58.078615 master-2 kubenswrapper[4776]: I1011 10:55:58.078540 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-86bdd47775-gpz8z" event={"ID":"fa8c4de6-1a61-4bea-9e87-0157bc5eeb49","Type":"ContainerDied","Data":"85ac756e7a970e46ccdb81506b9b8549165f9c0b853da21e24277ff1af233582"} Oct 11 10:55:58.078615 master-2 kubenswrapper[4776]: I1011 10:55:58.078598 4776 scope.go:117] "RemoveContainer" containerID="d9a713d23cd34e8d96c3ca875b6eba92bcfd3b88a3002d334e652f1999bce70c" Oct 11 10:55:58.321609 master-2 kubenswrapper[4776]: I1011 10:55:58.321477 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-86bdd47775-gpz8z"] Oct 11 10:55:58.377631 master-2 kubenswrapper[4776]: I1011 10:55:58.377544 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-86bdd47775-gpz8z"] Oct 11 10:56:00.068647 master-2 kubenswrapper[4776]: I1011 10:56:00.068575 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" path="/var/lib/kubelet/pods/fa8c4de6-1a61-4bea-9e87-0157bc5eeb49/volumes" Oct 11 10:56:01.101087 master-2 kubenswrapper[4776]: I1011 10:56:01.101026 4776 generic.go:334] "Generic (PLEG): container finished" podID="98ff7c8d-cc7c-4b25-917b-88dfa7f837c5" containerID="ab41c4a67f100a55ec2db0cdabcc4ba00728b7ab8e01537dd56a71b0f51c8c16" exitCode=0 Oct 11 10:56:01.101087 master-2 kubenswrapper[4776]: I1011 10:56:01.101070 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5","Type":"ContainerDied","Data":"ab41c4a67f100a55ec2db0cdabcc4ba00728b7ab8e01537dd56a71b0f51c8c16"} Oct 11 10:56:07.172386 master-2 
kubenswrapper[4776]: I1011 10:56:07.172272 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5","Type":"ContainerStarted","Data":"23f0e7b89983da20c11b93039bc87953bf1c5b41a82bfad5304a8f7dfd94bc3f"} Oct 11 10:56:11.776588 master-2 kubenswrapper[4776]: I1011 10:56:11.776533 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-748bbfcf89-tr8n2" Oct 11 10:56:11.864025 master-2 kubenswrapper[4776]: I1011 10:56:11.862058 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7887b79bcd-4lcts"] Oct 11 10:56:11.864025 master-2 kubenswrapper[4776]: I1011 10:56:11.862642 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7887b79bcd-4lcts" podUID="60f1a3e8-20d2-48e9-842c-9312ce07efe0" containerName="neutron-api" containerID="cri-o://a8ba91b5b068d25618fa0c0a4315f75f1ab1929925bf68853e336e2a934b0ca6" gracePeriod=30 Oct 11 10:56:11.864025 master-2 kubenswrapper[4776]: I1011 10:56:11.863132 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7887b79bcd-4lcts" podUID="60f1a3e8-20d2-48e9-842c-9312ce07efe0" containerName="neutron-httpd" containerID="cri-o://2c24e9bb8cd753d61f312b08264059f0ef167df16dddaf2f79133f8b02212dea" gracePeriod=30 Oct 11 10:56:12.215926 master-2 kubenswrapper[4776]: I1011 10:56:12.215851 4776 generic.go:334] "Generic (PLEG): container finished" podID="60f1a3e8-20d2-48e9-842c-9312ce07efe0" containerID="2c24e9bb8cd753d61f312b08264059f0ef167df16dddaf2f79133f8b02212dea" exitCode=0 Oct 11 10:56:12.215926 master-2 kubenswrapper[4776]: I1011 10:56:12.215910 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-4lcts" event={"ID":"60f1a3e8-20d2-48e9-842c-9312ce07efe0","Type":"ContainerDied","Data":"2c24e9bb8cd753d61f312b08264059f0ef167df16dddaf2f79133f8b02212dea"} Oct 11 10:56:20.181716 master-2 kubenswrapper[4776]: I1011 10:56:20.179744 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:56:20.181716 master-2 kubenswrapper[4776]: I1011 10:56:20.180114 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-internal-api-0" podUID="7353cefe-e495-4633-9472-93497ca94612" containerName="glance-log" containerID="cri-o://d313f81d0a8278bef4e01893ade947879d98007ed6a8ec60b3e2299775595e63" gracePeriod=30 Oct 11 10:56:20.181716 master-2 kubenswrapper[4776]: I1011 10:56:20.180189 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-b5802-default-internal-api-0" podUID="7353cefe-e495-4633-9472-93497ca94612" containerName="glance-httpd" containerID="cri-o://3b4c4342002f1ca53ae36a5961557e4addd090f4ce6b5d57db4005435cbb0b8e" gracePeriod=30 Oct 11 10:56:20.314088 master-2 kubenswrapper[4776]: I1011 10:56:20.314031 4776 generic.go:334] "Generic (PLEG): container finished" podID="60f1a3e8-20d2-48e9-842c-9312ce07efe0" containerID="a8ba91b5b068d25618fa0c0a4315f75f1ab1929925bf68853e336e2a934b0ca6" exitCode=0 Oct 11 10:56:20.314088 master-2 kubenswrapper[4776]: I1011 10:56:20.314093 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-4lcts" event={"ID":"60f1a3e8-20d2-48e9-842c-9312ce07efe0","Type":"ContainerDied","Data":"a8ba91b5b068d25618fa0c0a4315f75f1ab1929925bf68853e336e2a934b0ca6"} Oct 11 10:56:21.327436 master-2 kubenswrapper[4776]: 
I1011 10:56:21.327267 4776 generic.go:334] "Generic (PLEG): container finished" podID="7353cefe-e495-4633-9472-93497ca94612" containerID="d313f81d0a8278bef4e01893ade947879d98007ed6a8ec60b3e2299775595e63" exitCode=143 Oct 11 10:56:21.327436 master-2 kubenswrapper[4776]: I1011 10:56:21.327316 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"7353cefe-e495-4633-9472-93497ca94612","Type":"ContainerDied","Data":"d313f81d0a8278bef4e01893ade947879d98007ed6a8ec60b3e2299775595e63"} Oct 11 10:56:21.777651 master-2 kubenswrapper[4776]: I1011 10:56:21.777605 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:56:21.824244 master-2 kubenswrapper[4776]: I1011 10:56:21.824179 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-config\") pod \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " Oct 11 10:56:21.824491 master-2 kubenswrapper[4776]: I1011 10:56:21.824260 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfzck\" (UniqueName: \"kubernetes.io/projected/60f1a3e8-20d2-48e9-842c-9312ce07efe0-kube-api-access-kfzck\") pod \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " Oct 11 10:56:21.824491 master-2 kubenswrapper[4776]: I1011 10:56:21.824327 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-httpd-config\") pod \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " Oct 11 10:56:21.824491 master-2 kubenswrapper[4776]: I1011 10:56:21.824457 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-ovndb-tls-certs\") pod \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " Oct 11 10:56:21.824491 master-2 kubenswrapper[4776]: I1011 10:56:21.824482 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-combined-ca-bundle\") pod \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\" (UID: \"60f1a3e8-20d2-48e9-842c-9312ce07efe0\") " Oct 11 10:56:21.827519 master-2 kubenswrapper[4776]: I1011 10:56:21.827464 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60f1a3e8-20d2-48e9-842c-9312ce07efe0-kube-api-access-kfzck" (OuterVolumeSpecName: "kube-api-access-kfzck") pod "60f1a3e8-20d2-48e9-842c-9312ce07efe0" (UID: "60f1a3e8-20d2-48e9-842c-9312ce07efe0"). InnerVolumeSpecName "kube-api-access-kfzck". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:56:21.828278 master-2 kubenswrapper[4776]: I1011 10:56:21.828212 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "60f1a3e8-20d2-48e9-842c-9312ce07efe0" (UID: "60f1a3e8-20d2-48e9-842c-9312ce07efe0"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:21.874784 master-2 kubenswrapper[4776]: I1011 10:56:21.874725 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-config" (OuterVolumeSpecName: "config") pod "60f1a3e8-20d2-48e9-842c-9312ce07efe0" (UID: "60f1a3e8-20d2-48e9-842c-9312ce07efe0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:21.875282 master-2 kubenswrapper[4776]: I1011 10:56:21.875246 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60f1a3e8-20d2-48e9-842c-9312ce07efe0" (UID: "60f1a3e8-20d2-48e9-842c-9312ce07efe0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:21.892229 master-2 kubenswrapper[4776]: I1011 10:56:21.892178 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "60f1a3e8-20d2-48e9-842c-9312ce07efe0" (UID: "60f1a3e8-20d2-48e9-842c-9312ce07efe0"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:21.926624 master-2 kubenswrapper[4776]: I1011 10:56:21.926572 4776 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-ovndb-tls-certs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:21.926624 master-2 kubenswrapper[4776]: I1011 10:56:21.926618 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:21.926767 master-2 kubenswrapper[4776]: I1011 10:56:21.926629 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:21.926767 master-2 kubenswrapper[4776]: I1011 10:56:21.926640 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfzck\" (UniqueName: \"kubernetes.io/projected/60f1a3e8-20d2-48e9-842c-9312ce07efe0-kube-api-access-kfzck\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:21.926767 master-2 kubenswrapper[4776]: I1011 10:56:21.926651 4776 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/60f1a3e8-20d2-48e9-842c-9312ce07efe0-httpd-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:22.040535 master-2 kubenswrapper[4776]: I1011 10:56:22.040457 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9pr8j"] Oct 11 10:56:22.041000 master-2 kubenswrapper[4776]: E1011 10:56:22.040918 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60f1a3e8-20d2-48e9-842c-9312ce07efe0" containerName="neutron-httpd" Oct 11 10:56:22.041076 master-2 kubenswrapper[4776]: I1011 10:56:22.041008 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="60f1a3e8-20d2-48e9-842c-9312ce07efe0" containerName="neutron-httpd" Oct 11 10:56:22.041076 master-2 kubenswrapper[4776]: E1011 10:56:22.041026 4776 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="70447ad9-31f0-4f6a-8c40-19fbe8141ada" containerName="dnsmasq-dns" Oct 11 10:56:22.041076 master-2 kubenswrapper[4776]: I1011 10:56:22.041034 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="70447ad9-31f0-4f6a-8c40-19fbe8141ada" containerName="dnsmasq-dns" Oct 11 10:56:22.041076 master-2 kubenswrapper[4776]: E1011 10:56:22.041051 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60f1a3e8-20d2-48e9-842c-9312ce07efe0" containerName="neutron-api" Oct 11 10:56:22.041076 master-2 kubenswrapper[4776]: I1011 10:56:22.041059 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="60f1a3e8-20d2-48e9-842c-9312ce07efe0" containerName="neutron-api" Oct 11 10:56:22.041076 master-2 kubenswrapper[4776]: E1011 10:56:22.041072 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70447ad9-31f0-4f6a-8c40-19fbe8141ada" containerName="init" Oct 11 10:56:22.041076 master-2 kubenswrapper[4776]: I1011 10:56:22.041080 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="70447ad9-31f0-4f6a-8c40-19fbe8141ada" containerName="init" Oct 11 10:56:22.041322 master-2 kubenswrapper[4776]: E1011 10:56:22.041101 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" containerName="heat-engine" Oct 11 10:56:22.041322 master-2 kubenswrapper[4776]: I1011 10:56:22.041110 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" containerName="heat-engine" Oct 11 10:56:22.041322 master-2 kubenswrapper[4776]: E1011 10:56:22.041135 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be6ef0ad-fb25-4af2-a9fc-c89be4b1983b" containerName="mariadb-account-create" Oct 11 10:56:22.041322 master-2 kubenswrapper[4776]: I1011 10:56:22.041181 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="be6ef0ad-fb25-4af2-a9fc-c89be4b1983b" containerName="mariadb-account-create" Oct 11 10:56:22.041500 master-2 kubenswrapper[4776]: I1011 10:56:22.041408 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa8c4de6-1a61-4bea-9e87-0157bc5eeb49" containerName="heat-engine" Oct 11 10:56:22.041500 master-2 kubenswrapper[4776]: I1011 10:56:22.041440 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="60f1a3e8-20d2-48e9-842c-9312ce07efe0" containerName="neutron-api" Oct 11 10:56:22.041500 master-2 kubenswrapper[4776]: I1011 10:56:22.041458 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="60f1a3e8-20d2-48e9-842c-9312ce07efe0" containerName="neutron-httpd" Oct 11 10:56:22.041500 master-2 kubenswrapper[4776]: I1011 10:56:22.041474 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="be6ef0ad-fb25-4af2-a9fc-c89be4b1983b" containerName="mariadb-account-create" Oct 11 10:56:22.041500 master-2 kubenswrapper[4776]: I1011 10:56:22.041485 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="70447ad9-31f0-4f6a-8c40-19fbe8141ada" containerName="dnsmasq-dns" Oct 11 10:56:22.044281 master-2 kubenswrapper[4776]: I1011 10:56:22.044240 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:22.057123 master-2 kubenswrapper[4776]: I1011 10:56:22.057064 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9pr8j"] Oct 11 10:56:22.134249 master-2 kubenswrapper[4776]: I1011 10:56:22.134199 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-utilities\") pod \"certified-operators-9pr8j\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:22.134468 master-2 kubenswrapper[4776]: I1011 10:56:22.134277 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98qxz\" (UniqueName: \"kubernetes.io/projected/5baa2228-1c52-469a-abb5-483e30443701-kube-api-access-98qxz\") pod \"certified-operators-9pr8j\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:22.134468 master-2 kubenswrapper[4776]: I1011 10:56:22.134370 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-catalog-content\") pod \"certified-operators-9pr8j\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:22.235741 master-2 kubenswrapper[4776]: I1011 10:56:22.235597 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-catalog-content\") pod \"certified-operators-9pr8j\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:22.235741 master-2 kubenswrapper[4776]: I1011 10:56:22.235741 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-utilities\") pod \"certified-operators-9pr8j\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:22.235946 master-2 kubenswrapper[4776]: I1011 10:56:22.235764 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98qxz\" (UniqueName: \"kubernetes.io/projected/5baa2228-1c52-469a-abb5-483e30443701-kube-api-access-98qxz\") pod \"certified-operators-9pr8j\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:22.236519 master-2 kubenswrapper[4776]: I1011 10:56:22.236434 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-catalog-content\") pod \"certified-operators-9pr8j\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:22.236628 master-2 kubenswrapper[4776]: I1011 10:56:22.236574 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-utilities\") pod \"certified-operators-9pr8j\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " 
pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:22.260834 master-2 kubenswrapper[4776]: I1011 10:56:22.260790 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98qxz\" (UniqueName: \"kubernetes.io/projected/5baa2228-1c52-469a-abb5-483e30443701-kube-api-access-98qxz\") pod \"certified-operators-9pr8j\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:22.338198 master-2 kubenswrapper[4776]: I1011 10:56:22.338145 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887b79bcd-4lcts" event={"ID":"60f1a3e8-20d2-48e9-842c-9312ce07efe0","Type":"ContainerDied","Data":"c3c06a80f0b059f9f2526f170c4bc91b415604fc699cd4abb9acf3dd95970a4b"} Oct 11 10:56:22.338198 master-2 kubenswrapper[4776]: I1011 10:56:22.338203 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7887b79bcd-4lcts" Oct 11 10:56:22.339321 master-2 kubenswrapper[4776]: I1011 10:56:22.338221 4776 scope.go:117] "RemoveContainer" containerID="2c24e9bb8cd753d61f312b08264059f0ef167df16dddaf2f79133f8b02212dea" Oct 11 10:56:22.358348 master-2 kubenswrapper[4776]: I1011 10:56:22.358315 4776 scope.go:117] "RemoveContainer" containerID="a8ba91b5b068d25618fa0c0a4315f75f1ab1929925bf68853e336e2a934b0ca6" Oct 11 10:56:22.377144 master-2 kubenswrapper[4776]: I1011 10:56:22.377032 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7887b79bcd-4lcts"] Oct 11 10:56:22.381236 master-2 kubenswrapper[4776]: I1011 10:56:22.381198 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:22.383961 master-2 kubenswrapper[4776]: I1011 10:56:22.383918 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7887b79bcd-4lcts"] Oct 11 10:56:22.868953 master-2 kubenswrapper[4776]: I1011 10:56:22.868909 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9pr8j"] Oct 11 10:56:23.262362 master-2 kubenswrapper[4776]: I1011 10:56:23.262307 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-b5802-default-internal-api-0" podUID="7353cefe-e495-4633-9472-93497ca94612" containerName="glance-log" probeResult="failure" output="Get \"https://10.128.0.160:9292/healthcheck\": read tcp 10.128.0.2:56058->10.128.0.160:9292: read: connection reset by peer" Oct 11 10:56:23.262584 master-2 kubenswrapper[4776]: I1011 10:56:23.262339 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-b5802-default-internal-api-0" podUID="7353cefe-e495-4633-9472-93497ca94612" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.128.0.160:9292/healthcheck\": read tcp 10.128.0.2:56070->10.128.0.160:9292: read: connection reset by peer" Oct 11 10:56:23.352129 master-2 kubenswrapper[4776]: I1011 10:56:23.352061 4776 generic.go:334] "Generic (PLEG): container finished" podID="7353cefe-e495-4633-9472-93497ca94612" containerID="3b4c4342002f1ca53ae36a5961557e4addd090f4ce6b5d57db4005435cbb0b8e" exitCode=0 Oct 11 10:56:23.352129 master-2 kubenswrapper[4776]: I1011 10:56:23.352138 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"7353cefe-e495-4633-9472-93497ca94612","Type":"ContainerDied","Data":"3b4c4342002f1ca53ae36a5961557e4addd090f4ce6b5d57db4005435cbb0b8e"} Oct 11 10:56:23.355791 
master-2 kubenswrapper[4776]: I1011 10:56:23.355744 4776 generic.go:334] "Generic (PLEG): container finished" podID="5baa2228-1c52-469a-abb5-483e30443701" containerID="3bc11fde04d1f8b52a9e917c401ad9aed9276fbde11670b40c9d984d8f15247c" exitCode=0 Oct 11 10:56:23.355791 master-2 kubenswrapper[4776]: I1011 10:56:23.355781 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pr8j" event={"ID":"5baa2228-1c52-469a-abb5-483e30443701","Type":"ContainerDied","Data":"3bc11fde04d1f8b52a9e917c401ad9aed9276fbde11670b40c9d984d8f15247c"} Oct 11 10:56:23.355990 master-2 kubenswrapper[4776]: I1011 10:56:23.355806 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pr8j" event={"ID":"5baa2228-1c52-469a-abb5-483e30443701","Type":"ContainerStarted","Data":"1cb5b9d336455c89439755003242ff1ad85dfa104d62ac4092e9fd018ff8e5cd"} Oct 11 10:56:24.067033 master-2 kubenswrapper[4776]: I1011 10:56:24.066982 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60f1a3e8-20d2-48e9-842c-9312ce07efe0" path="/var/lib/kubelet/pods/60f1a3e8-20d2-48e9-842c-9312ce07efe0/volumes" Oct 11 10:56:24.210755 master-2 kubenswrapper[4776]: I1011 10:56:24.210712 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:24.279797 master-2 kubenswrapper[4776]: I1011 10:56:24.279750 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-logs\") pod \"7353cefe-e495-4633-9472-93497ca94612\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " Oct 11 10:56:24.280000 master-2 kubenswrapper[4776]: I1011 10:56:24.279827 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-config-data\") pod \"7353cefe-e495-4633-9472-93497ca94612\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " Oct 11 10:56:24.280000 master-2 kubenswrapper[4776]: I1011 10:56:24.279849 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnqtp\" (UniqueName: \"kubernetes.io/projected/7353cefe-e495-4633-9472-93497ca94612-kube-api-access-nnqtp\") pod \"7353cefe-e495-4633-9472-93497ca94612\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " Oct 11 10:56:24.280000 master-2 kubenswrapper[4776]: I1011 10:56:24.279957 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-httpd-run\") pod \"7353cefe-e495-4633-9472-93497ca94612\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " Oct 11 10:56:24.280183 master-2 kubenswrapper[4776]: I1011 10:56:24.280031 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-internal-tls-certs\") pod \"7353cefe-e495-4633-9472-93497ca94612\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " Oct 11 10:56:24.280237 master-2 kubenswrapper[4776]: I1011 10:56:24.280189 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"7353cefe-e495-4633-9472-93497ca94612\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " Oct 11 
10:56:24.280366 master-2 kubenswrapper[4776]: I1011 10:56:24.280289 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-combined-ca-bundle\") pod \"7353cefe-e495-4633-9472-93497ca94612\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " Oct 11 10:56:24.280439 master-2 kubenswrapper[4776]: I1011 10:56:24.280410 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-scripts\") pod \"7353cefe-e495-4633-9472-93497ca94612\" (UID: \"7353cefe-e495-4633-9472-93497ca94612\") " Oct 11 10:56:24.288474 master-2 kubenswrapper[4776]: I1011 10:56:24.288413 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-logs" (OuterVolumeSpecName: "logs") pod "7353cefe-e495-4633-9472-93497ca94612" (UID: "7353cefe-e495-4633-9472-93497ca94612"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:56:24.301741 master-2 kubenswrapper[4776]: I1011 10:56:24.300825 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-scripts" (OuterVolumeSpecName: "scripts") pod "7353cefe-e495-4633-9472-93497ca94612" (UID: "7353cefe-e495-4633-9472-93497ca94612"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:24.301741 master-2 kubenswrapper[4776]: I1011 10:56:24.301046 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7353cefe-e495-4633-9472-93497ca94612-kube-api-access-nnqtp" (OuterVolumeSpecName: "kube-api-access-nnqtp") pod "7353cefe-e495-4633-9472-93497ca94612" (UID: "7353cefe-e495-4633-9472-93497ca94612"). InnerVolumeSpecName "kube-api-access-nnqtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:56:24.311910 master-2 kubenswrapper[4776]: I1011 10:56:24.311834 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7353cefe-e495-4633-9472-93497ca94612" (UID: "7353cefe-e495-4633-9472-93497ca94612"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:56:24.352805 master-2 kubenswrapper[4776]: I1011 10:56:24.344042 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574" (OuterVolumeSpecName: "glance") pod "7353cefe-e495-4633-9472-93497ca94612" (UID: "7353cefe-e495-4633-9472-93497ca94612"). InnerVolumeSpecName "pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 11 10:56:24.352805 master-2 kubenswrapper[4776]: I1011 10:56:24.346925 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7353cefe-e495-4633-9472-93497ca94612" (UID: "7353cefe-e495-4633-9472-93497ca94612"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:24.367448 master-2 kubenswrapper[4776]: I1011 10:56:24.367362 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7353cefe-e495-4633-9472-93497ca94612" (UID: "7353cefe-e495-4633-9472-93497ca94612"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:24.372762 master-2 kubenswrapper[4776]: I1011 10:56:24.372553 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"7353cefe-e495-4633-9472-93497ca94612","Type":"ContainerDied","Data":"9a5837d1cb2c6c6bd2ee66332c6614d5f0898097af5aee94ed2b3ff54ca6ee42"} Oct 11 10:56:24.372762 master-2 kubenswrapper[4776]: I1011 10:56:24.372605 4776 scope.go:117] "RemoveContainer" containerID="3b4c4342002f1ca53ae36a5961557e4addd090f4ce6b5d57db4005435cbb0b8e" Oct 11 10:56:24.372762 master-2 kubenswrapper[4776]: I1011 10:56:24.372637 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:24.382231 master-2 kubenswrapper[4776]: I1011 10:56:24.382182 4776 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-logs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:24.382231 master-2 kubenswrapper[4776]: I1011 10:56:24.382224 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnqtp\" (UniqueName: \"kubernetes.io/projected/7353cefe-e495-4633-9472-93497ca94612-kube-api-access-nnqtp\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:24.382417 master-2 kubenswrapper[4776]: I1011 10:56:24.382242 4776 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7353cefe-e495-4633-9472-93497ca94612-httpd-run\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:24.382417 master-2 kubenswrapper[4776]: I1011 10:56:24.382255 4776 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-internal-tls-certs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:24.382417 master-2 kubenswrapper[4776]: I1011 10:56:24.382289 4776 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") on node \"master-2\" " Oct 11 10:56:24.382417 master-2 kubenswrapper[4776]: I1011 10:56:24.382305 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:24.382417 master-2 kubenswrapper[4776]: I1011 10:56:24.382318 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:24.382417 master-2 kubenswrapper[4776]: I1011 10:56:24.382352 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-config-data" (OuterVolumeSpecName: "config-data") pod "7353cefe-e495-4633-9472-93497ca94612" (UID: 
"7353cefe-e495-4633-9472-93497ca94612"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:24.408135 master-2 kubenswrapper[4776]: I1011 10:56:24.408094 4776 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Oct 11 10:56:24.408290 master-2 kubenswrapper[4776]: I1011 10:56:24.408266 4776 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c" (UniqueName: "kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574") on node "master-2" Oct 11 10:56:24.423953 master-2 kubenswrapper[4776]: I1011 10:56:24.423930 4776 scope.go:117] "RemoveContainer" containerID="d313f81d0a8278bef4e01893ade947879d98007ed6a8ec60b3e2299775595e63" Oct 11 10:56:24.485108 master-2 kubenswrapper[4776]: I1011 10:56:24.485037 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7353cefe-e495-4633-9472-93497ca94612-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:24.485108 master-2 kubenswrapper[4776]: I1011 10:56:24.485082 4776 reconciler_common.go:293] "Volume detached for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:24.733126 master-2 kubenswrapper[4776]: I1011 10:56:24.733041 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:56:24.741920 master-2 kubenswrapper[4776]: I1011 10:56:24.741867 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:56:24.768808 master-2 kubenswrapper[4776]: I1011 10:56:24.768756 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:56:24.769158 master-2 kubenswrapper[4776]: E1011 10:56:24.769097 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7353cefe-e495-4633-9472-93497ca94612" containerName="glance-log" Oct 11 10:56:24.769158 master-2 kubenswrapper[4776]: I1011 10:56:24.769110 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7353cefe-e495-4633-9472-93497ca94612" containerName="glance-log" Oct 11 10:56:24.769158 master-2 kubenswrapper[4776]: E1011 10:56:24.769124 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7353cefe-e495-4633-9472-93497ca94612" containerName="glance-httpd" Oct 11 10:56:24.769158 master-2 kubenswrapper[4776]: I1011 10:56:24.769132 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7353cefe-e495-4633-9472-93497ca94612" containerName="glance-httpd" Oct 11 10:56:24.769559 master-2 kubenswrapper[4776]: I1011 10:56:24.769526 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7353cefe-e495-4633-9472-93497ca94612" containerName="glance-httpd" Oct 11 10:56:24.769559 master-2 kubenswrapper[4776]: I1011 10:56:24.769561 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7353cefe-e495-4633-9472-93497ca94612" containerName="glance-log" Oct 11 10:56:24.770733 master-2 kubenswrapper[4776]: I1011 10:56:24.770692 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:24.774418 master-2 kubenswrapper[4776]: I1011 10:56:24.773753 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-b5802-default-internal-config-data" Oct 11 10:56:24.774418 master-2 kubenswrapper[4776]: I1011 10:56:24.774392 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 11 10:56:24.801580 master-2 kubenswrapper[4776]: I1011 10:56:24.801520 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:56:24.893369 master-2 kubenswrapper[4776]: I1011 10:56:24.893247 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc645cf2-0900-4b7d-8001-91098664c4cd-logs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:24.893369 master-2 kubenswrapper[4776]: I1011 10:56:24.893326 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:24.893369 master-2 kubenswrapper[4776]: I1011 10:56:24.893372 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:24.893617 master-2 kubenswrapper[4776]: I1011 10:56:24.893476 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-internal-tls-certs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:24.893617 master-2 kubenswrapper[4776]: I1011 10:56:24.893529 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-config-data\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:24.893617 master-2 kubenswrapper[4776]: I1011 10:56:24.893552 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bc645cf2-0900-4b7d-8001-91098664c4cd-httpd-run\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:24.893718 master-2 kubenswrapper[4776]: I1011 10:56:24.893643 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-scripts\") pod \"glance-b5802-default-internal-api-0\" 
(UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:24.893791 master-2 kubenswrapper[4776]: I1011 10:56:24.893747 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4twqn\" (UniqueName: \"kubernetes.io/projected/bc645cf2-0900-4b7d-8001-91098664c4cd-kube-api-access-4twqn\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:24.995809 master-2 kubenswrapper[4776]: I1011 10:56:24.995764 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc645cf2-0900-4b7d-8001-91098664c4cd-logs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:24.996074 master-2 kubenswrapper[4776]: I1011 10:56:24.996058 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:24.996175 master-2 kubenswrapper[4776]: I1011 10:56:24.996163 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:24.996308 master-2 kubenswrapper[4776]: I1011 10:56:24.996294 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-internal-tls-certs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:24.996418 master-2 kubenswrapper[4776]: I1011 10:56:24.996406 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-config-data\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:24.996496 master-2 kubenswrapper[4776]: I1011 10:56:24.996484 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bc645cf2-0900-4b7d-8001-91098664c4cd-httpd-run\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:24.996568 master-2 kubenswrapper[4776]: I1011 10:56:24.996308 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc645cf2-0900-4b7d-8001-91098664c4cd-logs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:24.996637 master-2 kubenswrapper[4776]: I1011 10:56:24.996623 4776 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-scripts\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:24.996738 master-2 kubenswrapper[4776]: I1011 10:56:24.996722 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4twqn\" (UniqueName: \"kubernetes.io/projected/bc645cf2-0900-4b7d-8001-91098664c4cd-kube-api-access-4twqn\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:24.997202 master-2 kubenswrapper[4776]: I1011 10:56:24.997165 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bc645cf2-0900-4b7d-8001-91098664c4cd-httpd-run\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:24.998151 master-2 kubenswrapper[4776]: I1011 10:56:24.998135 4776 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 11 10:56:24.998247 master-2 kubenswrapper[4776]: I1011 10:56:24.998231 4776 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/c5f302d2b867cc737f2daf9c42090b10daaee38f14f31a51f3dbff0cf77a4fd1/globalmount\"" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:25.000287 master-2 kubenswrapper[4776]: I1011 10:56:25.000244 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-internal-tls-certs\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:25.000371 master-2 kubenswrapper[4776]: I1011 10:56:25.000247 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-scripts\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:25.000371 master-2 kubenswrapper[4776]: I1011 10:56:25.000256 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-combined-ca-bundle\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:25.000863 master-2 kubenswrapper[4776]: I1011 10:56:25.000814 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc645cf2-0900-4b7d-8001-91098664c4cd-config-data\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:25.027096 master-2 kubenswrapper[4776]: I1011 
10:56:25.027024 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4twqn\" (UniqueName: \"kubernetes.io/projected/bc645cf2-0900-4b7d-8001-91098664c4cd-kube-api-access-4twqn\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:25.383387 master-2 kubenswrapper[4776]: I1011 10:56:25.383292 4776 generic.go:334] "Generic (PLEG): container finished" podID="5baa2228-1c52-469a-abb5-483e30443701" containerID="fe971507b0681a5d07246d18c8d56528aa0bc3f57ad326820f2a1eadf06f2fcf" exitCode=0 Oct 11 10:56:25.383387 master-2 kubenswrapper[4776]: I1011 10:56:25.383375 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pr8j" event={"ID":"5baa2228-1c52-469a-abb5-483e30443701","Type":"ContainerDied","Data":"fe971507b0681a5d07246d18c8d56528aa0bc3f57ad326820f2a1eadf06f2fcf"} Oct 11 10:56:26.071740 master-2 kubenswrapper[4776]: I1011 10:56:26.071622 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7353cefe-e495-4633-9472-93497ca94612" path="/var/lib/kubelet/pods/7353cefe-e495-4633-9472-93497ca94612/volumes" Oct 11 10:56:26.268881 master-2 kubenswrapper[4776]: I1011 10:56:26.268829 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6ab508d7-cf86-49f3-b7d8-fd4599d9f12c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b317c590-b067-4e80-b0da-aa6791fb9574\") pod \"glance-b5802-default-internal-api-0\" (UID: \"bc645cf2-0900-4b7d-8001-91098664c4cd\") " pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:26.285905 master-2 kubenswrapper[4776]: I1011 10:56:26.285668 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:26.424925 master-2 kubenswrapper[4776]: I1011 10:56:26.424887 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pr8j" event={"ID":"5baa2228-1c52-469a-abb5-483e30443701","Type":"ContainerStarted","Data":"efedac54a12d10418e1659b5155affd85b591b2264aa7f46fcb0434f06469265"} Oct 11 10:56:26.479325 master-2 kubenswrapper[4776]: I1011 10:56:26.479256 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9pr8j" podStartSLOduration=2.061721685 podStartE2EDuration="4.479239232s" podCreationTimestamp="2025-10-11 10:56:22 +0000 UTC" firstStartedPulling="2025-10-11 10:56:23.358018096 +0000 UTC m=+1818.142444805" lastFinishedPulling="2025-10-11 10:56:25.775535643 +0000 UTC m=+1820.559962352" observedRunningTime="2025-10-11 10:56:26.466623851 +0000 UTC m=+1821.251050570" watchObservedRunningTime="2025-10-11 10:56:26.479239232 +0000 UTC m=+1821.263665941" Oct 11 10:56:26.948728 master-2 kubenswrapper[4776]: I1011 10:56:26.948652 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b5802-default-internal-api-0"] Oct 11 10:56:26.949878 master-2 kubenswrapper[4776]: W1011 10:56:26.949818 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc645cf2_0900_4b7d_8001_91098664c4cd.slice/crio-355fff6ba7d3433f645568d1791e6bb10ce1bc665a8595381fd3681800cf9c69 WatchSource:0}: Error finding container 355fff6ba7d3433f645568d1791e6bb10ce1bc665a8595381fd3681800cf9c69: Status 404 returned error can't find the container with id 
355fff6ba7d3433f645568d1791e6bb10ce1bc665a8595381fd3681800cf9c69 Oct 11 10:56:27.204476 master-2 kubenswrapper[4776]: I1011 10:56:27.204223 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:56:27.205645 master-2 kubenswrapper[4776]: I1011 10:56:27.205566 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 11 10:56:27.209181 master-2 kubenswrapper[4776]: I1011 10:56:27.209132 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 11 10:56:27.229245 master-2 kubenswrapper[4776]: I1011 10:56:27.229162 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2"] Oct 11 10:56:27.258756 master-2 kubenswrapper[4776]: I1011 10:56:27.254332 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:56:27.258756 master-2 kubenswrapper[4776]: I1011 10:56:27.254383 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:56:27.258756 master-2 kubenswrapper[4776]: I1011 10:56:27.254492 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:56:27.258756 master-2 kubenswrapper[4776]: I1011 10:56:27.257950 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 11 10:56:27.309996 master-2 kubenswrapper[4776]: I1011 10:56:27.309829 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzwk6\" (UniqueName: \"kubernetes.io/projected/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-kube-api-access-hzwk6\") pod \"nova-scheduler-0\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") " pod="openstack/nova-scheduler-0" Oct 11 10:56:27.309996 master-2 kubenswrapper[4776]: I1011 10:56:27.309926 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2" Oct 11 10:56:27.309996 master-2 kubenswrapper[4776]: I1011 10:56:27.309980 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-config-data\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2" Oct 11 10:56:27.310568 master-2 kubenswrapper[4776]: I1011 10:56:27.310021 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-logs\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2" Oct 11 10:56:27.310568 master-2 kubenswrapper[4776]: I1011 10:56:27.310090 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np4rj\" (UniqueName: \"kubernetes.io/projected/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-kube-api-access-np4rj\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2" Oct 11 10:56:27.310568 master-2 kubenswrapper[4776]: I1011 10:56:27.310174 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-config-data\") pod \"nova-scheduler-0\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") " pod="openstack/nova-scheduler-0" Oct 11 10:56:27.310568 master-2 kubenswrapper[4776]: I1011 10:56:27.310237 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") " pod="openstack/nova-scheduler-0" Oct 11 10:56:27.415518 master-2 kubenswrapper[4776]: I1011 10:56:27.415462 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np4rj\" (UniqueName: \"kubernetes.io/projected/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-kube-api-access-np4rj\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2" Oct 11 10:56:27.415818 master-2 kubenswrapper[4776]: I1011 10:56:27.415660 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-config-data\") pod \"nova-scheduler-0\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") " pod="openstack/nova-scheduler-0" Oct 11 10:56:27.415915 master-2 kubenswrapper[4776]: I1011 10:56:27.415826 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") " pod="openstack/nova-scheduler-0" Oct 11 10:56:27.415915 master-2 kubenswrapper[4776]: I1011 10:56:27.415876 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzwk6\" (UniqueName: \"kubernetes.io/projected/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-kube-api-access-hzwk6\") pod \"nova-scheduler-0\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") " pod="openstack/nova-scheduler-0" Oct 11 10:56:27.416003 master-2 kubenswrapper[4776]: I1011 10:56:27.415922 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2" Oct 11 10:56:27.416003 master-2 kubenswrapper[4776]: I1011 10:56:27.415956 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-config-data\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2" Oct 11 10:56:27.419033 master-2 kubenswrapper[4776]: I1011 10:56:27.419000 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2" Oct 11 10:56:27.419326 master-2 kubenswrapper[4776]: I1011 10:56:27.419291 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-logs\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2" Oct 
11 10:56:27.419504 master-2 kubenswrapper[4776]: I1011 10:56:27.419476 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-config-data\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2" Oct 11 10:56:27.419813 master-2 kubenswrapper[4776]: I1011 10:56:27.419771 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-logs\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2" Oct 11 10:56:27.420517 master-2 kubenswrapper[4776]: I1011 10:56:27.420497 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-config-data\") pod \"nova-scheduler-0\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") " pod="openstack/nova-scheduler-0" Oct 11 10:56:27.421580 master-2 kubenswrapper[4776]: I1011 10:56:27.421539 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") " pod="openstack/nova-scheduler-0" Oct 11 10:56:27.438312 master-2 kubenswrapper[4776]: I1011 10:56:27.438273 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np4rj\" (UniqueName: \"kubernetes.io/projected/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-kube-api-access-np4rj\") pod \"nova-api-2\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " pod="openstack/nova-api-2" Oct 11 10:56:27.440646 master-2 kubenswrapper[4776]: I1011 10:56:27.440563 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzwk6\" (UniqueName: \"kubernetes.io/projected/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-kube-api-access-hzwk6\") pod \"nova-scheduler-0\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") " pod="openstack/nova-scheduler-0" Oct 11 10:56:27.453814 master-2 kubenswrapper[4776]: I1011 10:56:27.452209 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"bc645cf2-0900-4b7d-8001-91098664c4cd","Type":"ContainerStarted","Data":"355fff6ba7d3433f645568d1791e6bb10ce1bc665a8595381fd3681800cf9c69"} Oct 11 10:56:27.539377 master-2 kubenswrapper[4776]: I1011 10:56:27.539335 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 11 10:56:27.585872 master-2 kubenswrapper[4776]: I1011 10:56:27.585834 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:56:27.598830 master-2 kubenswrapper[4776]: I1011 10:56:27.598771 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 11 10:56:27.600261 master-2 kubenswrapper[4776]: I1011 10:56:27.600217 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 11 10:56:27.603661 master-2 kubenswrapper[4776]: I1011 10:56:27.603631 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 11 10:56:27.622205 master-2 kubenswrapper[4776]: I1011 10:56:27.621662 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 11 10:56:27.622205 master-2 kubenswrapper[4776]: I1011 10:56:27.622067 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-config-data\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0" Oct 11 10:56:27.622205 master-2 kubenswrapper[4776]: I1011 10:56:27.622118 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0" Oct 11 10:56:27.622508 master-2 kubenswrapper[4776]: I1011 10:56:27.622332 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ec11821-cdf5-45e1-a138-2b62dad57cc3-logs\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0" Oct 11 10:56:27.622508 master-2 kubenswrapper[4776]: I1011 10:56:27.622425 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75nct\" (UniqueName: \"kubernetes.io/projected/2ec11821-cdf5-45e1-a138-2b62dad57cc3-kube-api-access-75nct\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0" Oct 11 10:56:27.725021 master-2 kubenswrapper[4776]: I1011 10:56:27.724451 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-config-data\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0" Oct 11 10:56:27.725021 master-2 kubenswrapper[4776]: I1011 10:56:27.724519 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0" Oct 11 10:56:27.725021 master-2 kubenswrapper[4776]: I1011 10:56:27.724585 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ec11821-cdf5-45e1-a138-2b62dad57cc3-logs\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0" Oct 11 10:56:27.725021 master-2 kubenswrapper[4776]: I1011 10:56:27.724629 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75nct\" (UniqueName: \"kubernetes.io/projected/2ec11821-cdf5-45e1-a138-2b62dad57cc3-kube-api-access-75nct\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0" Oct 11 10:56:27.727611 master-2 kubenswrapper[4776]: I1011 10:56:27.727574 4776 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ec11821-cdf5-45e1-a138-2b62dad57cc3-logs\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0" Oct 11 10:56:27.729387 master-2 kubenswrapper[4776]: I1011 10:56:27.728937 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-config-data\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0" Oct 11 10:56:27.738739 master-2 kubenswrapper[4776]: I1011 10:56:27.738376 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0" Oct 11 10:56:27.758286 master-2 kubenswrapper[4776]: I1011 10:56:27.758241 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75nct\" (UniqueName: \"kubernetes.io/projected/2ec11821-cdf5-45e1-a138-2b62dad57cc3-kube-api-access-75nct\") pod \"nova-metadata-0\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " pod="openstack/nova-metadata-0" Oct 11 10:56:27.795259 master-2 kubenswrapper[4776]: I1011 10:56:27.793875 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 11 10:56:27.804775 master-2 kubenswrapper[4776]: I1011 10:56:27.804723 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79cbf74f6f-xmqbr"] Oct 11 10:56:27.806567 master-2 kubenswrapper[4776]: I1011 10:56:27.806428 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.822174 master-2 kubenswrapper[4776]: I1011 10:56:27.811817 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 11 10:56:27.822174 master-2 kubenswrapper[4776]: I1011 10:56:27.818864 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Oct 11 10:56:27.822174 master-2 kubenswrapper[4776]: I1011 10:56:27.821799 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 11 10:56:27.822174 master-2 kubenswrapper[4776]: I1011 10:56:27.822368 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 11 10:56:27.838854 master-2 kubenswrapper[4776]: I1011 10:56:27.824695 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 11 10:56:27.838854 master-2 kubenswrapper[4776]: I1011 10:56:27.831037 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-swift-storage-0\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.838854 master-2 kubenswrapper[4776]: I1011 10:56:27.831400 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcqbh\" (UniqueName: \"kubernetes.io/projected/43ef2dc9-3563-4188-8d91-2fc18c396a4a-kube-api-access-xcqbh\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.838854 master-2 kubenswrapper[4776]: I1011 10:56:27.831487 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-nb\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.838854 master-2 kubenswrapper[4776]: I1011 10:56:27.831511 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-config\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.838854 master-2 kubenswrapper[4776]: I1011 10:56:27.831625 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-svc\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.838854 master-2 kubenswrapper[4776]: I1011 10:56:27.831724 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-sb\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.847072 master-2 kubenswrapper[4776]: I1011 10:56:27.847015 4776 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/dnsmasq-dns-79cbf74f6f-xmqbr"] Oct 11 10:56:27.934124 master-2 kubenswrapper[4776]: I1011 10:56:27.933291 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-swift-storage-0\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.934124 master-2 kubenswrapper[4776]: I1011 10:56:27.933408 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcqbh\" (UniqueName: \"kubernetes.io/projected/43ef2dc9-3563-4188-8d91-2fc18c396a4a-kube-api-access-xcqbh\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.934124 master-2 kubenswrapper[4776]: I1011 10:56:27.933441 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-config\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.934124 master-2 kubenswrapper[4776]: I1011 10:56:27.933465 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-nb\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.934124 master-2 kubenswrapper[4776]: I1011 10:56:27.933508 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-svc\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.934124 master-2 kubenswrapper[4776]: I1011 10:56:27.933543 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-sb\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.940564 master-2 kubenswrapper[4776]: I1011 10:56:27.938239 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-swift-storage-0\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.940564 master-2 kubenswrapper[4776]: I1011 10:56:27.939096 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-svc\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.940564 master-2 kubenswrapper[4776]: I1011 10:56:27.940162 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-config\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: 
\"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.941787 master-2 kubenswrapper[4776]: I1011 10:56:27.941744 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-sb\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.943589 master-2 kubenswrapper[4776]: I1011 10:56:27.943480 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-nb\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:27.968665 master-2 kubenswrapper[4776]: I1011 10:56:27.968152 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcqbh\" (UniqueName: \"kubernetes.io/projected/43ef2dc9-3563-4188-8d91-2fc18c396a4a-kube-api-access-xcqbh\") pod \"dnsmasq-dns-79cbf74f6f-xmqbr\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:28.147092 master-2 kubenswrapper[4776]: I1011 10:56:28.146385 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:28.165353 master-2 kubenswrapper[4776]: I1011 10:56:28.165121 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:56:28.169030 master-2 kubenswrapper[4776]: W1011 10:56:28.168838 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a3bc084_f5d9_4e64_9350_d2c3b3487e76.slice/crio-bba76d3eeea8b745ea58d66c158d3960e6d455c9d713ec1220d847e9aa5a076e WatchSource:0}: Error finding container bba76d3eeea8b745ea58d66c158d3960e6d455c9d713ec1220d847e9aa5a076e: Status 404 returned error can't find the container with id bba76d3eeea8b745ea58d66c158d3960e6d455c9d713ec1220d847e9aa5a076e Oct 11 10:56:28.465514 master-2 kubenswrapper[4776]: I1011 10:56:28.465391 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"bc645cf2-0900-4b7d-8001-91098664c4cd","Type":"ContainerStarted","Data":"9692711be9bc0b51cc13331b85e1b1ad12740fb7e2db869ee8c8329fe0153917"} Oct 11 10:56:28.488114 master-2 kubenswrapper[4776]: I1011 10:56:28.471841 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"3a3bc084-f5d9-4e64-9350-d2c3b3487e76","Type":"ContainerStarted","Data":"bba76d3eeea8b745ea58d66c158d3960e6d455c9d713ec1220d847e9aa5a076e"} Oct 11 10:56:28.567116 master-2 kubenswrapper[4776]: I1011 10:56:28.565058 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:56:28.573444 master-2 kubenswrapper[4776]: I1011 10:56:28.573397 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 11 10:56:28.818941 master-2 kubenswrapper[4776]: W1011 10:56:28.818813 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-b2468f50ae0c430498d74cf1413c34467e7f27dc91b6590e32b6437ebfd5f4be WatchSource:0}: Error finding container 
b2468f50ae0c430498d74cf1413c34467e7f27dc91b6590e32b6437ebfd5f4be: Status 404 returned error can't find the container with id b2468f50ae0c430498d74cf1413c34467e7f27dc91b6590e32b6437ebfd5f4be Oct 11 10:56:28.821816 master-2 kubenswrapper[4776]: I1011 10:56:28.821768 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79cbf74f6f-xmqbr"] Oct 11 10:56:29.488381 master-2 kubenswrapper[4776]: I1011 10:56:29.488208 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b","Type":"ContainerStarted","Data":"443bca660f4af73bca6744e7e355caaadc56f7779ac49304e17e54dc85a7286c"} Oct 11 10:56:29.498730 master-2 kubenswrapper[4776]: I1011 10:56:29.498636 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ec11821-cdf5-45e1-a138-2b62dad57cc3","Type":"ContainerStarted","Data":"9b37b95095e4c99e4c588f15858480444284fe16e29197075e74b845d5fdd23b"} Oct 11 10:56:29.501775 master-2 kubenswrapper[4776]: I1011 10:56:29.501354 4776 generic.go:334] "Generic (PLEG): container finished" podID="43ef2dc9-3563-4188-8d91-2fc18c396a4a" containerID="c248f341904d5a6b165429ebe51185bc10d8bbf637b7b5baa2593c4ecc482b79" exitCode=0 Oct 11 10:56:29.501775 master-2 kubenswrapper[4776]: I1011 10:56:29.501408 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" event={"ID":"43ef2dc9-3563-4188-8d91-2fc18c396a4a","Type":"ContainerDied","Data":"c248f341904d5a6b165429ebe51185bc10d8bbf637b7b5baa2593c4ecc482b79"} Oct 11 10:56:29.501775 master-2 kubenswrapper[4776]: I1011 10:56:29.501426 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" event={"ID":"43ef2dc9-3563-4188-8d91-2fc18c396a4a","Type":"ContainerStarted","Data":"b2468f50ae0c430498d74cf1413c34467e7f27dc91b6590e32b6437ebfd5f4be"} Oct 11 10:56:29.506042 master-2 kubenswrapper[4776]: I1011 10:56:29.505992 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b5802-default-internal-api-0" event={"ID":"bc645cf2-0900-4b7d-8001-91098664c4cd","Type":"ContainerStarted","Data":"591aab95f05801f0019f8a3fc7d7819e1d10c278972d44e1c7797799e5a31883"} Oct 11 10:56:29.649057 master-2 kubenswrapper[4776]: I1011 10:56:29.645612 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b5802-default-internal-api-0" podStartSLOduration=5.645595262 podStartE2EDuration="5.645595262s" podCreationTimestamp="2025-10-11 10:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:56:29.644733608 +0000 UTC m=+1824.429160317" watchObservedRunningTime="2025-10-11 10:56:29.645595262 +0000 UTC m=+1824.430021971" Oct 11 10:56:30.522122 master-2 kubenswrapper[4776]: I1011 10:56:30.521942 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" event={"ID":"43ef2dc9-3563-4188-8d91-2fc18c396a4a","Type":"ContainerStarted","Data":"a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142"} Oct 11 10:56:30.522122 master-2 kubenswrapper[4776]: I1011 10:56:30.522087 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:30.554619 master-2 kubenswrapper[4776]: I1011 10:56:30.554287 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" 
podStartSLOduration=3.554251672 podStartE2EDuration="3.554251672s" podCreationTimestamp="2025-10-11 10:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:56:30.549603795 +0000 UTC m=+1825.334030504" watchObservedRunningTime="2025-10-11 10:56:30.554251672 +0000 UTC m=+1825.338678381" Oct 11 10:56:32.382077 master-2 kubenswrapper[4776]: I1011 10:56:32.381970 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:32.388941 master-2 kubenswrapper[4776]: I1011 10:56:32.388866 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:32.436768 master-2 kubenswrapper[4776]: I1011 10:56:32.436708 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:32.597065 master-2 kubenswrapper[4776]: I1011 10:56:32.597008 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:32.697185 master-2 kubenswrapper[4776]: I1011 10:56:32.697134 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9pr8j"] Oct 11 10:56:34.153075 master-2 kubenswrapper[4776]: I1011 10:56:34.153019 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 11 10:56:34.154124 master-2 kubenswrapper[4776]: I1011 10:56:34.154090 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="cbfedacb-2045-4297-be8f-3582dd2bcd7b" containerName="kube-state-metrics" containerID="cri-o://6f7f88ff1f83ffde680d343beac013ec48c1906c3184c197f7e12a2ad3e3ad7b" gracePeriod=30 Oct 11 10:56:34.561464 master-2 kubenswrapper[4776]: I1011 10:56:34.561409 4776 generic.go:334] "Generic (PLEG): container finished" podID="cbfedacb-2045-4297-be8f-3582dd2bcd7b" containerID="6f7f88ff1f83ffde680d343beac013ec48c1906c3184c197f7e12a2ad3e3ad7b" exitCode=2 Oct 11 10:56:34.561812 master-2 kubenswrapper[4776]: I1011 10:56:34.561788 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9pr8j" podUID="5baa2228-1c52-469a-abb5-483e30443701" containerName="registry-server" containerID="cri-o://efedac54a12d10418e1659b5155affd85b591b2264aa7f46fcb0434f06469265" gracePeriod=2 Oct 11 10:56:34.562700 master-2 kubenswrapper[4776]: I1011 10:56:34.562276 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cbfedacb-2045-4297-be8f-3582dd2bcd7b","Type":"ContainerDied","Data":"6f7f88ff1f83ffde680d343beac013ec48c1906c3184c197f7e12a2ad3e3ad7b"} Oct 11 10:56:35.575376 master-2 kubenswrapper[4776]: I1011 10:56:35.575292 4776 generic.go:334] "Generic (PLEG): container finished" podID="5baa2228-1c52-469a-abb5-483e30443701" containerID="efedac54a12d10418e1659b5155affd85b591b2264aa7f46fcb0434f06469265" exitCode=0 Oct 11 10:56:35.575376 master-2 kubenswrapper[4776]: I1011 10:56:35.575363 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pr8j" event={"ID":"5baa2228-1c52-469a-abb5-483e30443701","Type":"ContainerDied","Data":"efedac54a12d10418e1659b5155affd85b591b2264aa7f46fcb0434f06469265"} Oct 11 10:56:36.286653 master-2 kubenswrapper[4776]: I1011 
10:56:36.286524 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:36.286653 master-2 kubenswrapper[4776]: I1011 10:56:36.286597 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:36.326720 master-2 kubenswrapper[4776]: I1011 10:56:36.318629 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:36.336854 master-2 kubenswrapper[4776]: I1011 10:56:36.335760 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:36.585800 master-2 kubenswrapper[4776]: I1011 10:56:36.585395 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:36.585800 master-2 kubenswrapper[4776]: I1011 10:56:36.585440 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:37.498885 master-2 kubenswrapper[4776]: I1011 10:56:37.498848 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:37.610487 master-2 kubenswrapper[4776]: I1011 10:56:37.610418 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-catalog-content\") pod \"5baa2228-1c52-469a-abb5-483e30443701\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " Oct 11 10:56:37.611217 master-2 kubenswrapper[4776]: I1011 10:56:37.610523 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98qxz\" (UniqueName: \"kubernetes.io/projected/5baa2228-1c52-469a-abb5-483e30443701-kube-api-access-98qxz\") pod \"5baa2228-1c52-469a-abb5-483e30443701\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " Oct 11 10:56:37.611217 master-2 kubenswrapper[4776]: I1011 10:56:37.610609 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-utilities\") pod \"5baa2228-1c52-469a-abb5-483e30443701\" (UID: \"5baa2228-1c52-469a-abb5-483e30443701\") " Oct 11 10:56:37.613251 master-2 kubenswrapper[4776]: I1011 10:56:37.613156 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-utilities" (OuterVolumeSpecName: "utilities") pod "5baa2228-1c52-469a-abb5-483e30443701" (UID: "5baa2228-1c52-469a-abb5-483e30443701"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:56:37.615426 master-2 kubenswrapper[4776]: I1011 10:56:37.615316 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5baa2228-1c52-469a-abb5-483e30443701-kube-api-access-98qxz" (OuterVolumeSpecName: "kube-api-access-98qxz") pod "5baa2228-1c52-469a-abb5-483e30443701" (UID: "5baa2228-1c52-469a-abb5-483e30443701"). InnerVolumeSpecName "kube-api-access-98qxz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:56:37.624842 master-2 kubenswrapper[4776]: I1011 10:56:37.624425 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pr8j" event={"ID":"5baa2228-1c52-469a-abb5-483e30443701","Type":"ContainerDied","Data":"1cb5b9d336455c89439755003242ff1ad85dfa104d62ac4092e9fd018ff8e5cd"} Oct 11 10:56:37.624842 master-2 kubenswrapper[4776]: I1011 10:56:37.624513 4776 scope.go:117] "RemoveContainer" containerID="efedac54a12d10418e1659b5155affd85b591b2264aa7f46fcb0434f06469265" Oct 11 10:56:37.624842 master-2 kubenswrapper[4776]: I1011 10:56:37.624334 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9pr8j" Oct 11 10:56:37.695165 master-2 kubenswrapper[4776]: I1011 10:56:37.692369 4776 scope.go:117] "RemoveContainer" containerID="fe971507b0681a5d07246d18c8d56528aa0bc3f57ad326820f2a1eadf06f2fcf" Oct 11 10:56:37.695165 master-2 kubenswrapper[4776]: I1011 10:56:37.692947 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5baa2228-1c52-469a-abb5-483e30443701" (UID: "5baa2228-1c52-469a-abb5-483e30443701"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:56:37.719731 master-2 kubenswrapper[4776]: I1011 10:56:37.715366 4776 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-catalog-content\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:37.719731 master-2 kubenswrapper[4776]: I1011 10:56:37.715411 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98qxz\" (UniqueName: \"kubernetes.io/projected/5baa2228-1c52-469a-abb5-483e30443701-kube-api-access-98qxz\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:37.719731 master-2 kubenswrapper[4776]: I1011 10:56:37.715427 4776 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5baa2228-1c52-469a-abb5-483e30443701-utilities\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:37.723391 master-2 kubenswrapper[4776]: I1011 10:56:37.723340 4776 scope.go:117] "RemoveContainer" containerID="3bc11fde04d1f8b52a9e917c401ad9aed9276fbde11670b40c9d984d8f15247c" Oct 11 10:56:37.897129 master-2 kubenswrapper[4776]: I1011 10:56:37.897093 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 11 10:56:38.005011 master-2 kubenswrapper[4776]: I1011 10:56:38.004640 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9pr8j"] Oct 11 10:56:38.018598 master-2 kubenswrapper[4776]: I1011 10:56:38.018546 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9pr8j"] Oct 11 10:56:38.020716 master-2 kubenswrapper[4776]: I1011 10:56:38.019917 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm52p\" (UniqueName: \"kubernetes.io/projected/cbfedacb-2045-4297-be8f-3582dd2bcd7b-kube-api-access-cm52p\") pod \"cbfedacb-2045-4297-be8f-3582dd2bcd7b\" (UID: \"cbfedacb-2045-4297-be8f-3582dd2bcd7b\") " Oct 11 10:56:38.026906 master-2 kubenswrapper[4776]: I1011 10:56:38.026855 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbfedacb-2045-4297-be8f-3582dd2bcd7b-kube-api-access-cm52p" (OuterVolumeSpecName: "kube-api-access-cm52p") pod "cbfedacb-2045-4297-be8f-3582dd2bcd7b" (UID: "cbfedacb-2045-4297-be8f-3582dd2bcd7b"). InnerVolumeSpecName "kube-api-access-cm52p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:56:38.093641 master-2 kubenswrapper[4776]: I1011 10:56:38.093555 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5baa2228-1c52-469a-abb5-483e30443701" path="/var/lib/kubelet/pods/5baa2228-1c52-469a-abb5-483e30443701/volumes" Oct 11 10:56:38.122208 master-2 kubenswrapper[4776]: I1011 10:56:38.122158 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm52p\" (UniqueName: \"kubernetes.io/projected/cbfedacb-2045-4297-be8f-3582dd2bcd7b-kube-api-access-cm52p\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:38.148918 master-2 kubenswrapper[4776]: I1011 10:56:38.148862 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:56:38.648400 master-2 kubenswrapper[4776]: I1011 10:56:38.648324 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"3a3bc084-f5d9-4e64-9350-d2c3b3487e76","Type":"ContainerStarted","Data":"bb1eabd915662801894dc6613fb5062fd75144d8e9f8576573aae186cb323f10"} Oct 11 10:56:38.648916 master-2 kubenswrapper[4776]: I1011 10:56:38.648408 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"3a3bc084-f5d9-4e64-9350-d2c3b3487e76","Type":"ContainerStarted","Data":"5927eac8ba45ef4c7a0dc1c214bcc490374b8d265e2fa04fadc00756760072a6"} Oct 11 10:56:38.653019 master-2 kubenswrapper[4776]: I1011 10:56:38.652974 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b","Type":"ContainerStarted","Data":"c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554"} Oct 11 10:56:38.654937 master-2 kubenswrapper[4776]: I1011 10:56:38.654907 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ec11821-cdf5-45e1-a138-2b62dad57cc3","Type":"ContainerStarted","Data":"a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77"} Oct 11 10:56:38.654937 master-2 kubenswrapper[4776]: I1011 10:56:38.654944 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"2ec11821-cdf5-45e1-a138-2b62dad57cc3","Type":"ContainerStarted","Data":"a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12"} Oct 11 10:56:38.658269 master-2 kubenswrapper[4776]: I1011 10:56:38.658230 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cbfedacb-2045-4297-be8f-3582dd2bcd7b","Type":"ContainerDied","Data":"b03215961b9c5150bde0b4e7e2b48a359893ef2938e93f7fad3388b4aeef63a0"} Oct 11 10:56:38.658427 master-2 kubenswrapper[4776]: I1011 10:56:38.658244 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 11 10:56:38.658427 master-2 kubenswrapper[4776]: I1011 10:56:38.658278 4776 scope.go:117] "RemoveContainer" containerID="6f7f88ff1f83ffde680d343beac013ec48c1906c3184c197f7e12a2ad3e3ad7b" Oct 11 10:56:38.658832 master-2 kubenswrapper[4776]: I1011 10:56:38.658808 4776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 11 10:56:38.658970 master-2 kubenswrapper[4776]: I1011 10:56:38.658958 4776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 11 10:56:38.713024 master-2 kubenswrapper[4776]: I1011 10:56:38.712304 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-2" podStartSLOduration=2.400639707 podStartE2EDuration="11.712282597s" podCreationTimestamp="2025-10-11 10:56:27 +0000 UTC" firstStartedPulling="2025-10-11 10:56:28.173377047 +0000 UTC m=+1822.957803756" lastFinishedPulling="2025-10-11 10:56:37.485019937 +0000 UTC m=+1832.269446646" observedRunningTime="2025-10-11 10:56:38.673486006 +0000 UTC m=+1833.457912715" watchObservedRunningTime="2025-10-11 10:56:38.712282597 +0000 UTC m=+1833.496709306" Oct 11 10:56:38.718714 master-2 kubenswrapper[4776]: I1011 10:56:38.718554 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 11 10:56:38.738333 master-2 kubenswrapper[4776]: I1011 10:56:38.733537 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 11 10:56:38.746849 master-2 kubenswrapper[4776]: I1011 10:56:38.746788 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 11 10:56:38.747439 master-2 kubenswrapper[4776]: E1011 10:56:38.747381 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5baa2228-1c52-469a-abb5-483e30443701" containerName="extract-content" Oct 11 10:56:38.747439 master-2 kubenswrapper[4776]: I1011 10:56:38.747404 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="5baa2228-1c52-469a-abb5-483e30443701" containerName="extract-content" Oct 11 10:56:38.747439 master-2 kubenswrapper[4776]: E1011 10:56:38.747420 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5baa2228-1c52-469a-abb5-483e30443701" containerName="registry-server" Oct 11 10:56:38.747439 master-2 kubenswrapper[4776]: I1011 10:56:38.747427 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="5baa2228-1c52-469a-abb5-483e30443701" containerName="registry-server" Oct 11 10:56:38.747439 master-2 kubenswrapper[4776]: E1011 10:56:38.747443 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5baa2228-1c52-469a-abb5-483e30443701" containerName="extract-utilities" Oct 11 10:56:38.747739 master-2 kubenswrapper[4776]: I1011 10:56:38.747450 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="5baa2228-1c52-469a-abb5-483e30443701" containerName="extract-utilities" Oct 
11 10:56:38.747739 master-2 kubenswrapper[4776]: E1011 10:56:38.747468 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbfedacb-2045-4297-be8f-3582dd2bcd7b" containerName="kube-state-metrics" Oct 11 10:56:38.747739 master-2 kubenswrapper[4776]: I1011 10:56:38.747474 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbfedacb-2045-4297-be8f-3582dd2bcd7b" containerName="kube-state-metrics" Oct 11 10:56:38.747739 master-2 kubenswrapper[4776]: I1011 10:56:38.747625 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="5baa2228-1c52-469a-abb5-483e30443701" containerName="registry-server" Oct 11 10:56:38.747739 master-2 kubenswrapper[4776]: I1011 10:56:38.747644 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbfedacb-2045-4297-be8f-3582dd2bcd7b" containerName="kube-state-metrics" Oct 11 10:56:38.752741 master-2 kubenswrapper[4776]: I1011 10:56:38.748384 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.835271188 podStartE2EDuration="11.748363054s" podCreationTimestamp="2025-10-11 10:56:27 +0000 UTC" firstStartedPulling="2025-10-11 10:56:28.604626557 +0000 UTC m=+1823.389053266" lastFinishedPulling="2025-10-11 10:56:37.517718423 +0000 UTC m=+1832.302145132" observedRunningTime="2025-10-11 10:56:38.742371802 +0000 UTC m=+1833.526798511" watchObservedRunningTime="2025-10-11 10:56:38.748363054 +0000 UTC m=+1833.532789763" Oct 11 10:56:38.752741 master-2 kubenswrapper[4776]: I1011 10:56:38.748467 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 11 10:56:38.752741 master-2 kubenswrapper[4776]: I1011 10:56:38.751343 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Oct 11 10:56:38.752741 master-2 kubenswrapper[4776]: I1011 10:56:38.751558 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Oct 11 10:56:38.787241 master-2 kubenswrapper[4776]: I1011 10:56:38.766799 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 11 10:56:38.787241 master-2 kubenswrapper[4776]: I1011 10:56:38.770506 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.856007601 podStartE2EDuration="11.770493244s" podCreationTimestamp="2025-10-11 10:56:27 +0000 UTC" firstStartedPulling="2025-10-11 10:56:28.603830806 +0000 UTC m=+1823.388257515" lastFinishedPulling="2025-10-11 10:56:37.518316449 +0000 UTC m=+1832.302743158" observedRunningTime="2025-10-11 10:56:38.767148363 +0000 UTC m=+1833.551575072" watchObservedRunningTime="2025-10-11 10:56:38.770493244 +0000 UTC m=+1833.554919953" Oct 11 10:56:38.791770 master-2 kubenswrapper[4776]: I1011 10:56:38.788751 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:38.806452 master-2 kubenswrapper[4776]: I1011 10:56:38.806400 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-b5802-default-internal-api-0" Oct 11 10:56:38.848335 master-2 kubenswrapper[4776]: I1011 10:56:38.844063 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e540cea-23f3-44cc-8c37-178e530eb1f1-combined-ca-bundle\") pod 
\"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0" Oct 11 10:56:38.848335 master-2 kubenswrapper[4776]: I1011 10:56:38.844313 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e540cea-23f3-44cc-8c37-178e530eb1f1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0" Oct 11 10:56:38.848335 master-2 kubenswrapper[4776]: I1011 10:56:38.844361 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3e540cea-23f3-44cc-8c37-178e530eb1f1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0" Oct 11 10:56:38.848335 master-2 kubenswrapper[4776]: I1011 10:56:38.844401 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74445\" (UniqueName: \"kubernetes.io/projected/3e540cea-23f3-44cc-8c37-178e530eb1f1-kube-api-access-74445\") pod \"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0" Oct 11 10:56:38.966271 master-2 kubenswrapper[4776]: I1011 10:56:38.966200 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e540cea-23f3-44cc-8c37-178e530eb1f1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0" Oct 11 10:56:38.966271 master-2 kubenswrapper[4776]: I1011 10:56:38.966257 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3e540cea-23f3-44cc-8c37-178e530eb1f1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0" Oct 11 10:56:38.966547 master-2 kubenswrapper[4776]: I1011 10:56:38.966285 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74445\" (UniqueName: \"kubernetes.io/projected/3e540cea-23f3-44cc-8c37-178e530eb1f1-kube-api-access-74445\") pod \"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0" Oct 11 10:56:38.966547 master-2 kubenswrapper[4776]: I1011 10:56:38.966326 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e540cea-23f3-44cc-8c37-178e530eb1f1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0" Oct 11 10:56:38.970692 master-2 kubenswrapper[4776]: I1011 10:56:38.969864 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3e540cea-23f3-44cc-8c37-178e530eb1f1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0" Oct 11 10:56:38.970692 master-2 kubenswrapper[4776]: I1011 10:56:38.970209 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e540cea-23f3-44cc-8c37-178e530eb1f1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0" Oct 11 10:56:38.970692 master-2 kubenswrapper[4776]: I1011 10:56:38.970230 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e540cea-23f3-44cc-8c37-178e530eb1f1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0" Oct 11 10:56:38.995628 master-2 kubenswrapper[4776]: I1011 10:56:38.992903 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74445\" (UniqueName: \"kubernetes.io/projected/3e540cea-23f3-44cc-8c37-178e530eb1f1-kube-api-access-74445\") pod \"kube-state-metrics-0\" (UID: \"3e540cea-23f3-44cc-8c37-178e530eb1f1\") " pod="openstack/kube-state-metrics-0" Oct 11 10:56:39.075466 master-2 kubenswrapper[4776]: I1011 10:56:39.075398 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 11 10:56:39.563916 master-2 kubenswrapper[4776]: I1011 10:56:39.563594 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 11 10:56:39.678757 master-2 kubenswrapper[4776]: I1011 10:56:39.678688 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3e540cea-23f3-44cc-8c37-178e530eb1f1","Type":"ContainerStarted","Data":"4968e57c5e03e403068a64e2f89f207886b2485d5f203a20fa06002df46a1d63"} Oct 11 10:56:40.068346 master-2 kubenswrapper[4776]: I1011 10:56:40.068291 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbfedacb-2045-4297-be8f-3582dd2bcd7b" path="/var/lib/kubelet/pods/cbfedacb-2045-4297-be8f-3582dd2bcd7b/volumes" Oct 11 10:56:40.689907 master-2 kubenswrapper[4776]: I1011 10:56:40.689861 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3e540cea-23f3-44cc-8c37-178e530eb1f1","Type":"ContainerStarted","Data":"ac83364aaed2cb47eea651c765afd60741b473edf1ad8a1ac60b6ff303197735"} Oct 11 10:56:40.768489 master-2 kubenswrapper[4776]: I1011 10:56:40.768416 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.406022711 podStartE2EDuration="2.768397806s" podCreationTimestamp="2025-10-11 10:56:38 +0000 UTC" firstStartedPulling="2025-10-11 10:56:39.567005917 +0000 UTC m=+1834.351432626" lastFinishedPulling="2025-10-11 10:56:39.929381012 +0000 UTC m=+1834.713807721" observedRunningTime="2025-10-11 10:56:40.762094815 +0000 UTC m=+1835.546521534" watchObservedRunningTime="2025-10-11 10:56:40.768397806 +0000 UTC m=+1835.552824505" Oct 11 10:56:41.696808 master-2 kubenswrapper[4776]: I1011 10:56:41.696723 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 11 10:56:42.461752 master-2 kubenswrapper[4776]: I1011 10:56:42.461683 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="cbfedacb-2045-4297-be8f-3582dd2bcd7b" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.128.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 10:56:42.540348 master-2 kubenswrapper[4776]: I1011 
10:56:42.540279 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 11 10:56:42.795553 master-2 kubenswrapper[4776]: I1011 10:56:42.795390 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 11 10:56:42.796263 master-2 kubenswrapper[4776]: I1011 10:56:42.796245 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 11 10:56:45.308259 master-2 kubenswrapper[4776]: I1011 10:56:45.308044 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:56:45.309039 master-2 kubenswrapper[4776]: I1011 10:56:45.308340 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-2" podUID="3a3bc084-f5d9-4e64-9350-d2c3b3487e76" containerName="nova-api-log" containerID="cri-o://5927eac8ba45ef4c7a0dc1c214bcc490374b8d265e2fa04fadc00756760072a6" gracePeriod=30 Oct 11 10:56:45.309215 master-2 kubenswrapper[4776]: I1011 10:56:45.309067 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-2" podUID="3a3bc084-f5d9-4e64-9350-d2c3b3487e76" containerName="nova-api-api" containerID="cri-o://bb1eabd915662801894dc6613fb5062fd75144d8e9f8576573aae186cb323f10" gracePeriod=30 Oct 11 10:56:45.751115 master-2 kubenswrapper[4776]: I1011 10:56:45.751041 4776 generic.go:334] "Generic (PLEG): container finished" podID="3a3bc084-f5d9-4e64-9350-d2c3b3487e76" containerID="bb1eabd915662801894dc6613fb5062fd75144d8e9f8576573aae186cb323f10" exitCode=0 Oct 11 10:56:45.751115 master-2 kubenswrapper[4776]: I1011 10:56:45.751085 4776 generic.go:334] "Generic (PLEG): container finished" podID="3a3bc084-f5d9-4e64-9350-d2c3b3487e76" containerID="5927eac8ba45ef4c7a0dc1c214bcc490374b8d265e2fa04fadc00756760072a6" exitCode=143 Oct 11 10:56:45.751115 master-2 kubenswrapper[4776]: I1011 10:56:45.751109 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"3a3bc084-f5d9-4e64-9350-d2c3b3487e76","Type":"ContainerDied","Data":"bb1eabd915662801894dc6613fb5062fd75144d8e9f8576573aae186cb323f10"} Oct 11 10:56:45.751896 master-2 kubenswrapper[4776]: I1011 10:56:45.751149 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"3a3bc084-f5d9-4e64-9350-d2c3b3487e76","Type":"ContainerDied","Data":"5927eac8ba45ef4c7a0dc1c214bcc490374b8d265e2fa04fadc00756760072a6"} Oct 11 10:56:46.236754 master-2 kubenswrapper[4776]: I1011 10:56:46.236719 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:56:46.416591 master-2 kubenswrapper[4776]: I1011 10:56:46.416436 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-logs\") pod \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " Oct 11 10:56:46.416591 master-2 kubenswrapper[4776]: I1011 10:56:46.416538 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-config-data\") pod \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " Oct 11 10:56:46.417253 master-2 kubenswrapper[4776]: I1011 10:56:46.416822 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-logs" (OuterVolumeSpecName: "logs") pod "3a3bc084-f5d9-4e64-9350-d2c3b3487e76" (UID: "3a3bc084-f5d9-4e64-9350-d2c3b3487e76"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:56:46.417253 master-2 kubenswrapper[4776]: I1011 10:56:46.416850 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-combined-ca-bundle\") pod \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " Oct 11 10:56:46.417253 master-2 kubenswrapper[4776]: I1011 10:56:46.416968 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np4rj\" (UniqueName: \"kubernetes.io/projected/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-kube-api-access-np4rj\") pod \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\" (UID: \"3a3bc084-f5d9-4e64-9350-d2c3b3487e76\") " Oct 11 10:56:46.417534 master-2 kubenswrapper[4776]: I1011 10:56:46.417498 4776 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-logs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:46.420140 master-2 kubenswrapper[4776]: I1011 10:56:46.420089 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-kube-api-access-np4rj" (OuterVolumeSpecName: "kube-api-access-np4rj") pod "3a3bc084-f5d9-4e64-9350-d2c3b3487e76" (UID: "3a3bc084-f5d9-4e64-9350-d2c3b3487e76"). InnerVolumeSpecName "kube-api-access-np4rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:56:46.437875 master-2 kubenswrapper[4776]: I1011 10:56:46.437812 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a3bc084-f5d9-4e64-9350-d2c3b3487e76" (UID: "3a3bc084-f5d9-4e64-9350-d2c3b3487e76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:46.438203 master-2 kubenswrapper[4776]: I1011 10:56:46.438136 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-config-data" (OuterVolumeSpecName: "config-data") pod "3a3bc084-f5d9-4e64-9350-d2c3b3487e76" (UID: "3a3bc084-f5d9-4e64-9350-d2c3b3487e76"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:56:46.520255 master-2 kubenswrapper[4776]: I1011 10:56:46.520167 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:46.520255 master-2 kubenswrapper[4776]: I1011 10:56:46.520229 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np4rj\" (UniqueName: \"kubernetes.io/projected/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-kube-api-access-np4rj\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:46.520255 master-2 kubenswrapper[4776]: I1011 10:56:46.520251 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a3bc084-f5d9-4e64-9350-d2c3b3487e76-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:56:46.761806 master-2 kubenswrapper[4776]: I1011 10:56:46.761663 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"3a3bc084-f5d9-4e64-9350-d2c3b3487e76","Type":"ContainerDied","Data":"bba76d3eeea8b745ea58d66c158d3960e6d455c9d713ec1220d847e9aa5a076e"} Oct 11 10:56:46.761806 master-2 kubenswrapper[4776]: I1011 10:56:46.761747 4776 scope.go:117] "RemoveContainer" containerID="bb1eabd915662801894dc6613fb5062fd75144d8e9f8576573aae186cb323f10" Oct 11 10:56:46.762022 master-2 kubenswrapper[4776]: I1011 10:56:46.761820 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:56:46.779068 master-2 kubenswrapper[4776]: I1011 10:56:46.778979 4776 scope.go:117] "RemoveContainer" containerID="5927eac8ba45ef4c7a0dc1c214bcc490374b8d265e2fa04fadc00756760072a6" Oct 11 10:56:47.540014 master-2 kubenswrapper[4776]: I1011 10:56:47.539948 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 11 10:56:47.564749 master-2 kubenswrapper[4776]: I1011 10:56:47.564668 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 11 10:56:47.794956 master-2 kubenswrapper[4776]: I1011 10:56:47.794826 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 11 10:56:47.794956 master-2 kubenswrapper[4776]: I1011 10:56:47.794887 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 11 10:56:47.810372 master-2 kubenswrapper[4776]: I1011 10:56:47.810329 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 11 10:56:47.991134 master-2 kubenswrapper[4776]: I1011 10:56:47.991086 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:56:48.597602 master-2 kubenswrapper[4776]: I1011 10:56:48.597417 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:56:48.786757 master-2 kubenswrapper[4776]: I1011 10:56:48.786668 4776 generic.go:334] "Generic (PLEG): container finished" podID="98ff7c8d-cc7c-4b25-917b-88dfa7f837c5" containerID="23f0e7b89983da20c11b93039bc87953bf1c5b41a82bfad5304a8f7dfd94bc3f" exitCode=0 Oct 11 10:56:48.786757 master-2 kubenswrapper[4776]: I1011 10:56:48.786753 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" 
event={"ID":"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5","Type":"ContainerDied","Data":"23f0e7b89983da20c11b93039bc87953bf1c5b41a82bfad5304a8f7dfd94bc3f"} Oct 11 10:56:48.837080 master-2 kubenswrapper[4776]: I1011 10:56:48.836958 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.128.0.172:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 10:56:48.878105 master-2 kubenswrapper[4776]: I1011 10:56:48.877880 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.128.0.172:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 10:56:49.035000 master-2 kubenswrapper[4776]: I1011 10:56:49.034935 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2"] Oct 11 10:56:49.035285 master-2 kubenswrapper[4776]: E1011 10:56:49.035263 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a3bc084-f5d9-4e64-9350-d2c3b3487e76" containerName="nova-api-log" Oct 11 10:56:49.035285 master-2 kubenswrapper[4776]: I1011 10:56:49.035280 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a3bc084-f5d9-4e64-9350-d2c3b3487e76" containerName="nova-api-log" Oct 11 10:56:49.035364 master-2 kubenswrapper[4776]: E1011 10:56:49.035300 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a3bc084-f5d9-4e64-9350-d2c3b3487e76" containerName="nova-api-api" Oct 11 10:56:49.035364 master-2 kubenswrapper[4776]: I1011 10:56:49.035306 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a3bc084-f5d9-4e64-9350-d2c3b3487e76" containerName="nova-api-api" Oct 11 10:56:49.035491 master-2 kubenswrapper[4776]: I1011 10:56:49.035471 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a3bc084-f5d9-4e64-9350-d2c3b3487e76" containerName="nova-api-api" Oct 11 10:56:49.035491 master-2 kubenswrapper[4776]: I1011 10:56:49.035487 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a3bc084-f5d9-4e64-9350-d2c3b3487e76" containerName="nova-api-log" Oct 11 10:56:49.036408 master-2 kubenswrapper[4776]: I1011 10:56:49.036384 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:56:49.048424 master-2 kubenswrapper[4776]: I1011 10:56:49.047414 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 11 10:56:49.082806 master-2 kubenswrapper[4776]: I1011 10:56:49.082756 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 11 10:56:49.267189 master-2 kubenswrapper[4776]: I1011 10:56:49.267132 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:56:49.277468 master-2 kubenswrapper[4776]: I1011 10:56:49.277390 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Oct 11 10:56:49.280710 master-2 kubenswrapper[4776]: I1011 10:56:49.280633 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Oct 11 10:56:49.282963 master-2 kubenswrapper[4776]: I1011 10:56:49.282908 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Oct 11 10:56:49.283262 master-2 kubenswrapper[4776]: I1011 10:56:49.283227 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Oct 11 10:56:49.376692 master-2 kubenswrapper[4776]: I1011 10:56:49.376602 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bswtb\" (UniqueName: \"kubernetes.io/projected/1e253bd1-27b5-4423-8212-c9e698198d47-kube-api-access-bswtb\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " pod="openstack/nova-api-2" Oct 11 10:56:49.376910 master-2 kubenswrapper[4776]: I1011 10:56:49.376714 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " pod="openstack/nova-api-2" Oct 11 10:56:49.376910 master-2 kubenswrapper[4776]: I1011 10:56:49.376774 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-config-data\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " pod="openstack/aodh-0" Oct 11 10:56:49.376910 master-2 kubenswrapper[4776]: I1011 10:56:49.376889 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-scripts\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " pod="openstack/aodh-0" Oct 11 10:56:49.377013 master-2 kubenswrapper[4776]: I1011 10:56:49.376981 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e253bd1-27b5-4423-8212-c9e698198d47-logs\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " pod="openstack/nova-api-2" Oct 11 10:56:49.377088 master-2 kubenswrapper[4776]: I1011 10:56:49.377056 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2tgl\" (UniqueName: \"kubernetes.io/projected/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-kube-api-access-p2tgl\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " pod="openstack/aodh-0" Oct 11 10:56:49.377130 master-2 kubenswrapper[4776]: I1011 10:56:49.377103 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-combined-ca-bundle\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " pod="openstack/aodh-0" Oct 11 10:56:49.377179 master-2 kubenswrapper[4776]: I1011 10:56:49.377135 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-config-data\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " pod="openstack/nova-api-2" Oct 11 10:56:49.420927 master-2 kubenswrapper[4776]: I1011 10:56:49.420849 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/aodh-0"] Oct 11 10:56:49.489404 master-2 kubenswrapper[4776]: I1011 10:56:49.488894 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e253bd1-27b5-4423-8212-c9e698198d47-logs\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " pod="openstack/nova-api-2" Oct 11 10:56:49.489404 master-2 kubenswrapper[4776]: I1011 10:56:49.489059 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2tgl\" (UniqueName: \"kubernetes.io/projected/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-kube-api-access-p2tgl\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " pod="openstack/aodh-0" Oct 11 10:56:49.489404 master-2 kubenswrapper[4776]: I1011 10:56:49.489086 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-combined-ca-bundle\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " pod="openstack/aodh-0" Oct 11 10:56:49.489404 master-2 kubenswrapper[4776]: I1011 10:56:49.489147 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-config-data\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " pod="openstack/nova-api-2" Oct 11 10:56:49.489404 master-2 kubenswrapper[4776]: I1011 10:56:49.489214 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bswtb\" (UniqueName: \"kubernetes.io/projected/1e253bd1-27b5-4423-8212-c9e698198d47-kube-api-access-bswtb\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " pod="openstack/nova-api-2" Oct 11 10:56:49.489404 master-2 kubenswrapper[4776]: I1011 10:56:49.489270 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " pod="openstack/nova-api-2" Oct 11 10:56:49.489404 master-2 kubenswrapper[4776]: I1011 10:56:49.489302 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-config-data\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " pod="openstack/aodh-0" Oct 11 10:56:49.489404 master-2 kubenswrapper[4776]: I1011 10:56:49.489358 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-scripts\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " pod="openstack/aodh-0" Oct 11 10:56:49.492402 master-2 kubenswrapper[4776]: I1011 10:56:49.492292 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e253bd1-27b5-4423-8212-c9e698198d47-logs\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " pod="openstack/nova-api-2" Oct 11 10:56:49.501866 master-2 kubenswrapper[4776]: I1011 10:56:49.501188 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-scripts\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " 
pod="openstack/aodh-0" Oct 11 10:56:49.501866 master-2 kubenswrapper[4776]: I1011 10:56:49.501400 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-config-data\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " pod="openstack/aodh-0" Oct 11 10:56:49.503202 master-2 kubenswrapper[4776]: I1011 10:56:49.502879 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " pod="openstack/nova-api-2" Oct 11 10:56:49.503391 master-2 kubenswrapper[4776]: I1011 10:56:49.503144 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-config-data\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " pod="openstack/nova-api-2" Oct 11 10:56:49.507291 master-2 kubenswrapper[4776]: I1011 10:56:49.504835 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-combined-ca-bundle\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " pod="openstack/aodh-0" Oct 11 10:56:49.539829 master-2 kubenswrapper[4776]: I1011 10:56:49.539718 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bswtb\" (UniqueName: \"kubernetes.io/projected/1e253bd1-27b5-4423-8212-c9e698198d47-kube-api-access-bswtb\") pod \"nova-api-2\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " pod="openstack/nova-api-2" Oct 11 10:56:49.546522 master-2 kubenswrapper[4776]: I1011 10:56:49.546481 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2tgl\" (UniqueName: \"kubernetes.io/projected/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-kube-api-access-p2tgl\") pod \"aodh-0\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " pod="openstack/aodh-0" Oct 11 10:56:49.652111 master-2 kubenswrapper[4776]: I1011 10:56:49.652056 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Oct 11 10:56:49.665762 master-2 kubenswrapper[4776]: I1011 10:56:49.665698 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:56:49.867832 master-2 kubenswrapper[4776]: I1011 10:56:49.867522 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5","Type":"ContainerStarted","Data":"bc39980fe155a215983f5125ab18f03af9b1943c164a6004518235f80c71b417"} Oct 11 10:56:50.075439 master-2 kubenswrapper[4776]: I1011 10:56:50.075389 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a3bc084-f5d9-4e64-9350-d2c3b3487e76" path="/var/lib/kubelet/pods/3a3bc084-f5d9-4e64-9350-d2c3b3487e76/volumes" Oct 11 10:56:50.289233 master-2 kubenswrapper[4776]: I1011 10:56:50.288904 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:56:50.523721 master-2 kubenswrapper[4776]: W1011 10:56:50.523632 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-317d33ecc929d12f2e23fd3b8f482ae3dda6757c15d263ac0d8348bb3993f8df WatchSource:0}: Error finding container 317d33ecc929d12f2e23fd3b8f482ae3dda6757c15d263ac0d8348bb3993f8df: Status 404 returned error can't find the container with id 317d33ecc929d12f2e23fd3b8f482ae3dda6757c15d263ac0d8348bb3993f8df Oct 11 10:56:50.525360 master-2 kubenswrapper[4776]: I1011 10:56:50.525319 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Oct 11 10:56:50.883089 master-2 kubenswrapper[4776]: I1011 10:56:50.882974 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5","Type":"ContainerStarted","Data":"6f4994c81eacfe212a6b9366f040eee5ed5fea00a06f081dd304d4196753ce98"} Oct 11 10:56:50.883089 master-2 kubenswrapper[4776]: I1011 10:56:50.883066 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"98ff7c8d-cc7c-4b25-917b-88dfa7f837c5","Type":"ContainerStarted","Data":"031ee9bd372b45f4446076c4504a27d6422c72aa7df9a11e5f860b1bc605188d"} Oct 11 10:56:50.883927 master-2 kubenswrapper[4776]: I1011 10:56:50.883554 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Oct 11 10:56:50.883927 master-2 kubenswrapper[4776]: I1011 10:56:50.883796 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Oct 11 10:56:50.885508 master-2 kubenswrapper[4776]: I1011 10:56:50.885424 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3","Type":"ContainerStarted","Data":"317d33ecc929d12f2e23fd3b8f482ae3dda6757c15d263ac0d8348bb3993f8df"} Oct 11 10:56:50.888443 master-2 kubenswrapper[4776]: I1011 10:56:50.888390 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"1e253bd1-27b5-4423-8212-c9e698198d47","Type":"ContainerStarted","Data":"01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70"} Oct 11 10:56:50.888443 master-2 kubenswrapper[4776]: I1011 10:56:50.888437 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"1e253bd1-27b5-4423-8212-c9e698198d47","Type":"ContainerStarted","Data":"756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954"} Oct 11 10:56:50.888563 master-2 kubenswrapper[4776]: I1011 10:56:50.888458 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-2" event={"ID":"1e253bd1-27b5-4423-8212-c9e698198d47","Type":"ContainerStarted","Data":"96fd4fc611b6a4bbbf2d990ec6feda8d8e8df9da67b88b7ad2b8d15e7527f7ac"} Oct 11 10:56:50.935462 master-2 kubenswrapper[4776]: I1011 10:56:50.935334 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=59.218879773 podStartE2EDuration="1m36.93529401s" podCreationTimestamp="2025-10-11 10:55:14 +0000 UTC" firstStartedPulling="2025-10-11 10:55:28.58384402 +0000 UTC m=+1763.368270729" lastFinishedPulling="2025-10-11 10:56:06.300258237 +0000 UTC m=+1801.084684966" observedRunningTime="2025-10-11 10:56:50.924039595 +0000 UTC m=+1845.708466304" watchObservedRunningTime="2025-10-11 10:56:50.93529401 +0000 UTC m=+1845.719720719" Oct 11 10:56:50.953158 master-2 kubenswrapper[4776]: I1011 10:56:50.953032 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-2" podStartSLOduration=2.952967759 podStartE2EDuration="2.952967759s" podCreationTimestamp="2025-10-11 10:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:56:50.947015997 +0000 UTC m=+1845.731442706" watchObservedRunningTime="2025-10-11 10:56:50.952967759 +0000 UTC m=+1845.737394479" Oct 11 10:56:51.898122 master-2 kubenswrapper[4776]: I1011 10:56:51.898077 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Oct 11 10:56:54.092139 master-2 kubenswrapper[4776]: I1011 10:56:54.091982 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Oct 11 10:56:54.928621 master-2 kubenswrapper[4776]: I1011 10:56:54.928556 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3","Type":"ContainerStarted","Data":"e4ba08b3d6a3495897a11277b1fc244a6f1d3e3d1223b0873ad4182dea2a4b01"} Oct 11 10:56:56.955435 master-2 kubenswrapper[4776]: I1011 10:56:56.955377 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3","Type":"ContainerStarted","Data":"326e9fc4149cc880197755466f98e8b8e170fd384dd03ab05ef261cdaf3f4253"} Oct 11 10:56:57.797802 master-2 kubenswrapper[4776]: I1011 10:56:57.797751 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 11 10:56:57.798443 master-2 kubenswrapper[4776]: I1011 10:56:57.798365 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 11 10:56:57.800752 master-2 kubenswrapper[4776]: I1011 10:56:57.800699 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 11 10:56:57.967250 master-2 kubenswrapper[4776]: I1011 10:56:57.967202 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 11 10:56:58.978145 master-2 kubenswrapper[4776]: I1011 10:56:58.978064 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3","Type":"ContainerStarted","Data":"37c9c3f974a6d34524b1afbc2045074821422c1756dbdd22caa6addb90b8625b"} Oct 11 10:56:59.666400 master-2 kubenswrapper[4776]: I1011 10:56:59.666345 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-2" Oct 11 10:56:59.666631 master-2 
kubenswrapper[4776]: I1011 10:56:59.666473 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-2" Oct 11 10:57:00.747939 master-2 kubenswrapper[4776]: I1011 10:57:00.747870 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-2" podUID="1e253bd1-27b5-4423-8212-c9e698198d47" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.0.175:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 10:57:00.748543 master-2 kubenswrapper[4776]: I1011 10:57:00.748202 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-2" podUID="1e253bd1-27b5-4423-8212-c9e698198d47" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.0.175:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 10:57:02.002017 master-2 kubenswrapper[4776]: I1011 10:57:02.001960 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3","Type":"ContainerStarted","Data":"b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9"} Oct 11 10:57:02.002729 master-2 kubenswrapper[4776]: I1011 10:57:02.002107 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-api" containerID="cri-o://e4ba08b3d6a3495897a11277b1fc244a6f1d3e3d1223b0873ad4182dea2a4b01" gracePeriod=30 Oct 11 10:57:02.002729 master-2 kubenswrapper[4776]: I1011 10:57:02.002531 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-listener" containerID="cri-o://b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9" gracePeriod=30 Oct 11 10:57:02.002729 master-2 kubenswrapper[4776]: I1011 10:57:02.002575 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-notifier" containerID="cri-o://37c9c3f974a6d34524b1afbc2045074821422c1756dbdd22caa6addb90b8625b" gracePeriod=30 Oct 11 10:57:02.002729 master-2 kubenswrapper[4776]: I1011 10:57:02.002613 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-evaluator" containerID="cri-o://326e9fc4149cc880197755466f98e8b8e170fd384dd03ab05ef261cdaf3f4253" gracePeriod=30 Oct 11 10:57:02.038099 master-2 kubenswrapper[4776]: I1011 10:57:02.038013 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.540885194 podStartE2EDuration="14.037995912s" podCreationTimestamp="2025-10-11 10:56:48 +0000 UTC" firstStartedPulling="2025-10-11 10:56:50.526443417 +0000 UTC m=+1845.310870126" lastFinishedPulling="2025-10-11 10:57:01.023554135 +0000 UTC m=+1855.807980844" observedRunningTime="2025-10-11 10:57:02.037087326 +0000 UTC m=+1856.821514035" watchObservedRunningTime="2025-10-11 10:57:02.037995912 +0000 UTC m=+1856.822422631" Oct 11 10:57:03.014445 master-2 kubenswrapper[4776]: I1011 10:57:03.014369 4776 generic.go:334] "Generic (PLEG): container finished" podID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerID="326e9fc4149cc880197755466f98e8b8e170fd384dd03ab05ef261cdaf3f4253" exitCode=0 Oct 11 10:57:03.014445 master-2 kubenswrapper[4776]: I1011 
10:57:03.014418 4776 generic.go:334] "Generic (PLEG): container finished" podID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerID="e4ba08b3d6a3495897a11277b1fc244a6f1d3e3d1223b0873ad4182dea2a4b01" exitCode=0 Oct 11 10:57:03.014445 master-2 kubenswrapper[4776]: I1011 10:57:03.014442 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3","Type":"ContainerDied","Data":"326e9fc4149cc880197755466f98e8b8e170fd384dd03ab05ef261cdaf3f4253"} Oct 11 10:57:03.015319 master-2 kubenswrapper[4776]: I1011 10:57:03.014474 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3","Type":"ContainerDied","Data":"e4ba08b3d6a3495897a11277b1fc244a6f1d3e3d1223b0873ad4182dea2a4b01"} Oct 11 10:57:04.026091 master-2 kubenswrapper[4776]: I1011 10:57:04.026028 4776 generic.go:334] "Generic (PLEG): container finished" podID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerID="37c9c3f974a6d34524b1afbc2045074821422c1756dbdd22caa6addb90b8625b" exitCode=0 Oct 11 10:57:04.026091 master-2 kubenswrapper[4776]: I1011 10:57:04.026081 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3","Type":"ContainerDied","Data":"37c9c3f974a6d34524b1afbc2045074821422c1756dbdd22caa6addb90b8625b"} Oct 11 10:57:09.670577 master-2 kubenswrapper[4776]: I1011 10:57:09.670435 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-2" Oct 11 10:57:09.671193 master-2 kubenswrapper[4776]: I1011 10:57:09.670991 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-2" Oct 11 10:57:09.673155 master-2 kubenswrapper[4776]: I1011 10:57:09.673104 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-2" Oct 11 10:57:09.673510 master-2 kubenswrapper[4776]: I1011 10:57:09.673432 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-2" Oct 11 10:57:10.076533 master-2 kubenswrapper[4776]: I1011 10:57:10.076380 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-2" Oct 11 10:57:10.079325 master-2 kubenswrapper[4776]: I1011 10:57:10.079280 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-2" Oct 11 10:57:13.219223 master-2 kubenswrapper[4776]: I1011 10:57:13.219155 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:13.219774 master-2 kubenswrapper[4776]: I1011 10:57:13.219406 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-2" podUID="1e253bd1-27b5-4423-8212-c9e698198d47" containerName="nova-api-log" containerID="cri-o://756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954" gracePeriod=30 Oct 11 10:57:13.219995 master-2 kubenswrapper[4776]: I1011 10:57:13.219953 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-2" podUID="1e253bd1-27b5-4423-8212-c9e698198d47" containerName="nova-api-api" containerID="cri-o://01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70" gracePeriod=30 Oct 11 10:57:14.113333 master-2 kubenswrapper[4776]: I1011 10:57:14.113186 4776 generic.go:334] "Generic (PLEG): container finished" podID="1e253bd1-27b5-4423-8212-c9e698198d47" containerID="756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954" 
exitCode=143 Oct 11 10:57:14.113333 master-2 kubenswrapper[4776]: I1011 10:57:14.113298 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"1e253bd1-27b5-4423-8212-c9e698198d47","Type":"ContainerDied","Data":"756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954"} Oct 11 10:57:17.154291 master-2 kubenswrapper[4776]: I1011 10:57:17.154153 4776 generic.go:334] "Generic (PLEG): container finished" podID="1e253bd1-27b5-4423-8212-c9e698198d47" containerID="01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70" exitCode=0 Oct 11 10:57:17.154291 master-2 kubenswrapper[4776]: I1011 10:57:17.154206 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"1e253bd1-27b5-4423-8212-c9e698198d47","Type":"ContainerDied","Data":"01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70"} Oct 11 10:57:17.264739 master-2 kubenswrapper[4776]: I1011 10:57:17.264689 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:57:17.320716 master-2 kubenswrapper[4776]: I1011 10:57:17.320640 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-combined-ca-bundle\") pod \"1e253bd1-27b5-4423-8212-c9e698198d47\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " Oct 11 10:57:17.321412 master-2 kubenswrapper[4776]: I1011 10:57:17.321382 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-config-data\") pod \"1e253bd1-27b5-4423-8212-c9e698198d47\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " Oct 11 10:57:17.321716 master-2 kubenswrapper[4776]: I1011 10:57:17.321659 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bswtb\" (UniqueName: \"kubernetes.io/projected/1e253bd1-27b5-4423-8212-c9e698198d47-kube-api-access-bswtb\") pod \"1e253bd1-27b5-4423-8212-c9e698198d47\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " Oct 11 10:57:17.321915 master-2 kubenswrapper[4776]: I1011 10:57:17.321900 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e253bd1-27b5-4423-8212-c9e698198d47-logs\") pod \"1e253bd1-27b5-4423-8212-c9e698198d47\" (UID: \"1e253bd1-27b5-4423-8212-c9e698198d47\") " Oct 11 10:57:17.323715 master-2 kubenswrapper[4776]: I1011 10:57:17.323665 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e253bd1-27b5-4423-8212-c9e698198d47-logs" (OuterVolumeSpecName: "logs") pod "1e253bd1-27b5-4423-8212-c9e698198d47" (UID: "1e253bd1-27b5-4423-8212-c9e698198d47"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:57:17.331983 master-2 kubenswrapper[4776]: I1011 10:57:17.331901 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e253bd1-27b5-4423-8212-c9e698198d47-kube-api-access-bswtb" (OuterVolumeSpecName: "kube-api-access-bswtb") pod "1e253bd1-27b5-4423-8212-c9e698198d47" (UID: "1e253bd1-27b5-4423-8212-c9e698198d47"). InnerVolumeSpecName "kube-api-access-bswtb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:57:17.348019 master-2 kubenswrapper[4776]: I1011 10:57:17.347956 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-config-data" (OuterVolumeSpecName: "config-data") pod "1e253bd1-27b5-4423-8212-c9e698198d47" (UID: "1e253bd1-27b5-4423-8212-c9e698198d47"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:17.349894 master-2 kubenswrapper[4776]: I1011 10:57:17.349839 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e253bd1-27b5-4423-8212-c9e698198d47" (UID: "1e253bd1-27b5-4423-8212-c9e698198d47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:17.425004 master-2 kubenswrapper[4776]: I1011 10:57:17.424918 4776 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e253bd1-27b5-4423-8212-c9e698198d47-logs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:17.425004 master-2 kubenswrapper[4776]: I1011 10:57:17.424987 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:17.425004 master-2 kubenswrapper[4776]: I1011 10:57:17.424998 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e253bd1-27b5-4423-8212-c9e698198d47-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:17.425004 master-2 kubenswrapper[4776]: I1011 10:57:17.425008 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bswtb\" (UniqueName: \"kubernetes.io/projected/1e253bd1-27b5-4423-8212-c9e698198d47-kube-api-access-bswtb\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:18.164297 master-2 kubenswrapper[4776]: I1011 10:57:18.164226 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"1e253bd1-27b5-4423-8212-c9e698198d47","Type":"ContainerDied","Data":"96fd4fc611b6a4bbbf2d990ec6feda8d8e8df9da67b88b7ad2b8d15e7527f7ac"} Oct 11 10:57:18.164297 master-2 kubenswrapper[4776]: I1011 10:57:18.164306 4776 scope.go:117] "RemoveContainer" containerID="01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70" Oct 11 10:57:18.165145 master-2 kubenswrapper[4776]: I1011 10:57:18.164604 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:57:18.189259 master-2 kubenswrapper[4776]: I1011 10:57:18.189201 4776 scope.go:117] "RemoveContainer" containerID="756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954" Oct 11 10:57:18.212644 master-2 kubenswrapper[4776]: I1011 10:57:18.212572 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:18.225068 master-2 kubenswrapper[4776]: I1011 10:57:18.224990 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:18.275999 master-2 kubenswrapper[4776]: I1011 10:57:18.275903 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:18.276712 master-2 kubenswrapper[4776]: E1011 10:57:18.276668 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e253bd1-27b5-4423-8212-c9e698198d47" containerName="nova-api-log" Oct 11 10:57:18.276712 master-2 kubenswrapper[4776]: I1011 10:57:18.276711 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e253bd1-27b5-4423-8212-c9e698198d47" containerName="nova-api-log" Oct 11 10:57:18.276837 master-2 kubenswrapper[4776]: E1011 10:57:18.276774 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e253bd1-27b5-4423-8212-c9e698198d47" containerName="nova-api-api" Oct 11 10:57:18.276837 master-2 kubenswrapper[4776]: I1011 10:57:18.276783 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e253bd1-27b5-4423-8212-c9e698198d47" containerName="nova-api-api" Oct 11 10:57:18.277905 master-2 kubenswrapper[4776]: I1011 10:57:18.277780 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e253bd1-27b5-4423-8212-c9e698198d47" containerName="nova-api-api" Oct 11 10:57:18.277905 master-2 kubenswrapper[4776]: I1011 10:57:18.277905 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e253bd1-27b5-4423-8212-c9e698198d47" containerName="nova-api-log" Oct 11 10:57:18.280845 master-2 kubenswrapper[4776]: I1011 10:57:18.280761 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:57:18.284841 master-2 kubenswrapper[4776]: I1011 10:57:18.284772 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 11 10:57:18.285173 master-2 kubenswrapper[4776]: I1011 10:57:18.285125 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 11 10:57:18.285601 master-2 kubenswrapper[4776]: I1011 10:57:18.285480 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 11 10:57:18.292600 master-2 kubenswrapper[4776]: I1011 10:57:18.292524 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:18.471000 master-2 kubenswrapper[4776]: I1011 10:57:18.470581 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.471000 master-2 kubenswrapper[4776]: I1011 10:57:18.470657 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-internal-tls-certs\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.471000 master-2 kubenswrapper[4776]: I1011 10:57:18.470908 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a4feec2-a4ba-4906-90f7-0912fc708375-logs\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.471000 master-2 kubenswrapper[4776]: I1011 10:57:18.470949 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-public-tls-certs\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.471000 master-2 kubenswrapper[4776]: I1011 10:57:18.470975 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z75mf\" (UniqueName: \"kubernetes.io/projected/3a4feec2-a4ba-4906-90f7-0912fc708375-kube-api-access-z75mf\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.471817 master-2 kubenswrapper[4776]: I1011 10:57:18.471130 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-config-data\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.572591 master-2 kubenswrapper[4776]: I1011 10:57:18.572518 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-internal-tls-certs\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.572591 master-2 kubenswrapper[4776]: I1011 10:57:18.572580 4776 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.572927 master-2 kubenswrapper[4776]: I1011 10:57:18.572696 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a4feec2-a4ba-4906-90f7-0912fc708375-logs\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.572927 master-2 kubenswrapper[4776]: I1011 10:57:18.572724 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-public-tls-certs\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.572927 master-2 kubenswrapper[4776]: I1011 10:57:18.572741 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z75mf\" (UniqueName: \"kubernetes.io/projected/3a4feec2-a4ba-4906-90f7-0912fc708375-kube-api-access-z75mf\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.572927 master-2 kubenswrapper[4776]: I1011 10:57:18.572785 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-config-data\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.574423 master-2 kubenswrapper[4776]: I1011 10:57:18.574354 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a4feec2-a4ba-4906-90f7-0912fc708375-logs\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.576458 master-2 kubenswrapper[4776]: I1011 10:57:18.576409 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-internal-tls-certs\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.576776 master-2 kubenswrapper[4776]: I1011 10:57:18.576745 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-config-data\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.576827 master-2 kubenswrapper[4776]: I1011 10:57:18.576744 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-public-tls-certs\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.576978 master-2 kubenswrapper[4776]: I1011 10:57:18.576909 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.595927 master-2 kubenswrapper[4776]: I1011 10:57:18.595824 4776 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-z75mf\" (UniqueName: \"kubernetes.io/projected/3a4feec2-a4ba-4906-90f7-0912fc708375-kube-api-access-z75mf\") pod \"nova-api-2\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " pod="openstack/nova-api-2" Oct 11 10:57:18.659870 master-2 kubenswrapper[4776]: I1011 10:57:18.659759 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:57:19.007663 master-2 kubenswrapper[4776]: I1011 10:57:19.006580 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:57:19.007663 master-2 kubenswrapper[4776]: I1011 10:57:19.006849 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b" containerName="nova-scheduler-scheduler" containerID="cri-o://c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554" gracePeriod=30 Oct 11 10:57:19.022443 master-2 kubenswrapper[4776]: I1011 10:57:19.020891 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Oct 11 10:57:19.063661 master-2 kubenswrapper[4776]: I1011 10:57:19.061157 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Oct 11 10:57:19.064761 master-2 kubenswrapper[4776]: I1011 10:57:19.064708 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Oct 11 10:57:19.108576 master-2 kubenswrapper[4776]: I1011 10:57:19.108520 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:19.181338 master-2 kubenswrapper[4776]: I1011 10:57:19.181296 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"3a4feec2-a4ba-4906-90f7-0912fc708375","Type":"ContainerStarted","Data":"6503028ff1c30fc40e811d73cf5d9d6f0477fe67e3a05d72d99b498521268894"} Oct 11 10:57:20.066873 master-2 kubenswrapper[4776]: I1011 10:57:20.066811 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e253bd1-27b5-4423-8212-c9e698198d47" path="/var/lib/kubelet/pods/1e253bd1-27b5-4423-8212-c9e698198d47/volumes" Oct 11 10:57:20.191277 master-2 kubenswrapper[4776]: I1011 10:57:20.191102 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"3a4feec2-a4ba-4906-90f7-0912fc708375","Type":"ContainerStarted","Data":"8fad6ef13ce32e48a0200e0d51e6ec1869e31fdbae9e43f6b4327a3848b3263c"} Oct 11 10:57:20.191277 master-2 kubenswrapper[4776]: I1011 10:57:20.191162 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"3a4feec2-a4ba-4906-90f7-0912fc708375","Type":"ContainerStarted","Data":"fef38235c13a94e0f559cc147b180218c57bb86edee562a4d0f5a4ef26d4a256"} Oct 11 10:57:20.231701 master-2 kubenswrapper[4776]: I1011 10:57:20.227311 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-2" podStartSLOduration=2.227287888 podStartE2EDuration="2.227287888s" podCreationTimestamp="2025-10-11 10:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:57:20.221317116 +0000 UTC m=+1875.005743835" watchObservedRunningTime="2025-10-11 10:57:20.227287888 +0000 UTC m=+1875.011714607" Oct 11 10:57:21.202146 master-2 kubenswrapper[4776]: I1011 10:57:21.201008 4776 generic.go:334] "Generic (PLEG): 
container finished" podID="9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b" containerID="c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554" exitCode=0 Oct 11 10:57:21.202146 master-2 kubenswrapper[4776]: I1011 10:57:21.201320 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b","Type":"ContainerDied","Data":"c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554"} Oct 11 10:57:21.862697 master-2 kubenswrapper[4776]: I1011 10:57:21.862643 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 11 10:57:21.959381 master-2 kubenswrapper[4776]: I1011 10:57:21.959315 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-combined-ca-bundle\") pod \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") " Oct 11 10:57:21.959784 master-2 kubenswrapper[4776]: I1011 10:57:21.959741 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-config-data\") pod \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") " Oct 11 10:57:21.960339 master-2 kubenswrapper[4776]: I1011 10:57:21.960305 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzwk6\" (UniqueName: \"kubernetes.io/projected/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-kube-api-access-hzwk6\") pod \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\" (UID: \"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b\") " Oct 11 10:57:21.967365 master-2 kubenswrapper[4776]: I1011 10:57:21.967304 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-kube-api-access-hzwk6" (OuterVolumeSpecName: "kube-api-access-hzwk6") pod "9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b" (UID: "9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b"). InnerVolumeSpecName "kube-api-access-hzwk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:57:21.988612 master-2 kubenswrapper[4776]: I1011 10:57:21.986994 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b" (UID: "9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:21.997728 master-2 kubenswrapper[4776]: I1011 10:57:21.997474 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-config-data" (OuterVolumeSpecName: "config-data") pod "9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b" (UID: "9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:22.064030 master-2 kubenswrapper[4776]: I1011 10:57:22.063111 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:22.064030 master-2 kubenswrapper[4776]: I1011 10:57:22.063148 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:22.064030 master-2 kubenswrapper[4776]: I1011 10:57:22.063157 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzwk6\" (UniqueName: \"kubernetes.io/projected/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b-kube-api-access-hzwk6\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:22.221574 master-2 kubenswrapper[4776]: I1011 10:57:22.218489 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b","Type":"ContainerDied","Data":"443bca660f4af73bca6744e7e355caaadc56f7779ac49304e17e54dc85a7286c"} Oct 11 10:57:22.221574 master-2 kubenswrapper[4776]: I1011 10:57:22.218590 4776 scope.go:117] "RemoveContainer" containerID="c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554" Oct 11 10:57:22.221574 master-2 kubenswrapper[4776]: I1011 10:57:22.218878 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 11 10:57:22.258130 master-2 kubenswrapper[4776]: I1011 10:57:22.258054 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:57:22.265856 master-2 kubenswrapper[4776]: I1011 10:57:22.265805 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:57:22.281462 master-2 kubenswrapper[4776]: I1011 10:57:22.281402 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:57:22.281905 master-2 kubenswrapper[4776]: E1011 10:57:22.281874 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b" containerName="nova-scheduler-scheduler" Oct 11 10:57:22.281905 master-2 kubenswrapper[4776]: I1011 10:57:22.281894 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b" containerName="nova-scheduler-scheduler" Oct 11 10:57:22.282127 master-2 kubenswrapper[4776]: I1011 10:57:22.282100 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b" containerName="nova-scheduler-scheduler" Oct 11 10:57:22.282902 master-2 kubenswrapper[4776]: I1011 10:57:22.282876 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 11 10:57:22.288748 master-2 kubenswrapper[4776]: I1011 10:57:22.288706 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 11 10:57:22.307641 master-2 kubenswrapper[4776]: I1011 10:57:22.307534 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:57:22.367786 master-2 kubenswrapper[4776]: I1011 10:57:22.367733 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwj7l\" (UniqueName: \"kubernetes.io/projected/69f600e5-9321-41ce-9ec4-abee215f69fe-kube-api-access-mwj7l\") pod \"nova-scheduler-0\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " pod="openstack/nova-scheduler-0" Oct 11 10:57:22.368010 master-2 kubenswrapper[4776]: I1011 10:57:22.367855 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " pod="openstack/nova-scheduler-0" Oct 11 10:57:22.368010 master-2 kubenswrapper[4776]: I1011 10:57:22.367936 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-config-data\") pod \"nova-scheduler-0\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " pod="openstack/nova-scheduler-0" Oct 11 10:57:22.471983 master-2 kubenswrapper[4776]: I1011 10:57:22.471877 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-config-data\") pod \"nova-scheduler-0\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " pod="openstack/nova-scheduler-0" Oct 11 10:57:22.472710 master-2 kubenswrapper[4776]: I1011 10:57:22.472650 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwj7l\" (UniqueName: \"kubernetes.io/projected/69f600e5-9321-41ce-9ec4-abee215f69fe-kube-api-access-mwj7l\") pod \"nova-scheduler-0\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " pod="openstack/nova-scheduler-0" Oct 11 10:57:22.472831 master-2 kubenswrapper[4776]: I1011 10:57:22.472813 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " pod="openstack/nova-scheduler-0" Oct 11 10:57:22.480578 master-2 kubenswrapper[4776]: I1011 10:57:22.480543 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-config-data\") pod \"nova-scheduler-0\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " pod="openstack/nova-scheduler-0" Oct 11 10:57:22.480746 master-2 kubenswrapper[4776]: I1011 10:57:22.480657 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " pod="openstack/nova-scheduler-0" Oct 11 10:57:22.494965 master-2 kubenswrapper[4776]: I1011 10:57:22.494765 
4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwj7l\" (UniqueName: \"kubernetes.io/projected/69f600e5-9321-41ce-9ec4-abee215f69fe-kube-api-access-mwj7l\") pod \"nova-scheduler-0\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " pod="openstack/nova-scheduler-0" Oct 11 10:57:22.601978 master-2 kubenswrapper[4776]: I1011 10:57:22.601816 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 11 10:57:23.111941 master-2 kubenswrapper[4776]: I1011 10:57:23.111850 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:57:23.229651 master-2 kubenswrapper[4776]: I1011 10:57:23.229588 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"69f600e5-9321-41ce-9ec4-abee215f69fe","Type":"ContainerStarted","Data":"bb574ec91c838b585beb61a822a5e96518597aaa45bd1ce4b4d86481bf2e5fb7"} Oct 11 10:57:24.068759 master-2 kubenswrapper[4776]: I1011 10:57:24.068698 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b" path="/var/lib/kubelet/pods/9bb7e6f8-0f63-47b1-b46c-8ae9fbc3fe8b/volumes" Oct 11 10:57:24.243245 master-2 kubenswrapper[4776]: I1011 10:57:24.243145 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"69f600e5-9321-41ce-9ec4-abee215f69fe","Type":"ContainerStarted","Data":"a1ea9dd039fb792b9b4194ff7606c452151d402cebba8363c5c12a6a1d82ba1c"} Oct 11 10:57:24.276485 master-2 kubenswrapper[4776]: I1011 10:57:24.276388 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.276369935 podStartE2EDuration="2.276369935s" podCreationTimestamp="2025-10-11 10:57:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:57:24.267206017 +0000 UTC m=+1879.051632726" watchObservedRunningTime="2025-10-11 10:57:24.276369935 +0000 UTC m=+1879.060796644" Oct 11 10:57:27.320643 master-2 kubenswrapper[4776]: I1011 10:57:27.320512 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:27.322539 master-2 kubenswrapper[4776]: I1011 10:57:27.321287 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-2" podUID="3a4feec2-a4ba-4906-90f7-0912fc708375" containerName="nova-api-api" containerID="cri-o://8fad6ef13ce32e48a0200e0d51e6ec1869e31fdbae9e43f6b4327a3848b3263c" gracePeriod=30 Oct 11 10:57:27.322539 master-2 kubenswrapper[4776]: I1011 10:57:27.321697 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-2" podUID="3a4feec2-a4ba-4906-90f7-0912fc708375" containerName="nova-api-log" containerID="cri-o://fef38235c13a94e0f559cc147b180218c57bb86edee562a4d0f5a4ef26d4a256" gracePeriod=30 Oct 11 10:57:27.602023 master-2 kubenswrapper[4776]: I1011 10:57:27.601970 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 11 10:57:28.276957 master-2 kubenswrapper[4776]: I1011 10:57:28.276905 4776 generic.go:334] "Generic (PLEG): container finished" podID="3a4feec2-a4ba-4906-90f7-0912fc708375" containerID="8fad6ef13ce32e48a0200e0d51e6ec1869e31fdbae9e43f6b4327a3848b3263c" exitCode=0 Oct 11 10:57:28.276957 master-2 kubenswrapper[4776]: I1011 10:57:28.276940 4776 generic.go:334] "Generic (PLEG): container 
finished" podID="3a4feec2-a4ba-4906-90f7-0912fc708375" containerID="fef38235c13a94e0f559cc147b180218c57bb86edee562a4d0f5a4ef26d4a256" exitCode=143 Oct 11 10:57:28.276957 master-2 kubenswrapper[4776]: I1011 10:57:28.276960 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"3a4feec2-a4ba-4906-90f7-0912fc708375","Type":"ContainerDied","Data":"8fad6ef13ce32e48a0200e0d51e6ec1869e31fdbae9e43f6b4327a3848b3263c"} Oct 11 10:57:28.277236 master-2 kubenswrapper[4776]: I1011 10:57:28.276987 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"3a4feec2-a4ba-4906-90f7-0912fc708375","Type":"ContainerDied","Data":"fef38235c13a94e0f559cc147b180218c57bb86edee562a4d0f5a4ef26d4a256"} Oct 11 10:57:28.277236 master-2 kubenswrapper[4776]: I1011 10:57:28.277010 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"3a4feec2-a4ba-4906-90f7-0912fc708375","Type":"ContainerDied","Data":"6503028ff1c30fc40e811d73cf5d9d6f0477fe67e3a05d72d99b498521268894"} Oct 11 10:57:28.277236 master-2 kubenswrapper[4776]: I1011 10:57:28.277021 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6503028ff1c30fc40e811d73cf5d9d6f0477fe67e3a05d72d99b498521268894" Oct 11 10:57:28.334559 master-2 kubenswrapper[4776]: I1011 10:57:28.334508 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:57:28.489086 master-2 kubenswrapper[4776]: I1011 10:57:28.489010 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-config-data\") pod \"3a4feec2-a4ba-4906-90f7-0912fc708375\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " Oct 11 10:57:28.489086 master-2 kubenswrapper[4776]: I1011 10:57:28.489081 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a4feec2-a4ba-4906-90f7-0912fc708375-logs\") pod \"3a4feec2-a4ba-4906-90f7-0912fc708375\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " Oct 11 10:57:28.489386 master-2 kubenswrapper[4776]: I1011 10:57:28.489211 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-internal-tls-certs\") pod \"3a4feec2-a4ba-4906-90f7-0912fc708375\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " Oct 11 10:57:28.489386 master-2 kubenswrapper[4776]: I1011 10:57:28.489242 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-combined-ca-bundle\") pod \"3a4feec2-a4ba-4906-90f7-0912fc708375\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " Oct 11 10:57:28.489386 master-2 kubenswrapper[4776]: I1011 10:57:28.489329 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-public-tls-certs\") pod \"3a4feec2-a4ba-4906-90f7-0912fc708375\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " Oct 11 10:57:28.489506 master-2 kubenswrapper[4776]: I1011 10:57:28.489446 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z75mf\" (UniqueName: 
\"kubernetes.io/projected/3a4feec2-a4ba-4906-90f7-0912fc708375-kube-api-access-z75mf\") pod \"3a4feec2-a4ba-4906-90f7-0912fc708375\" (UID: \"3a4feec2-a4ba-4906-90f7-0912fc708375\") " Oct 11 10:57:28.490001 master-2 kubenswrapper[4776]: I1011 10:57:28.489434 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a4feec2-a4ba-4906-90f7-0912fc708375-logs" (OuterVolumeSpecName: "logs") pod "3a4feec2-a4ba-4906-90f7-0912fc708375" (UID: "3a4feec2-a4ba-4906-90f7-0912fc708375"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:57:28.494474 master-2 kubenswrapper[4776]: I1011 10:57:28.494415 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a4feec2-a4ba-4906-90f7-0912fc708375-kube-api-access-z75mf" (OuterVolumeSpecName: "kube-api-access-z75mf") pod "3a4feec2-a4ba-4906-90f7-0912fc708375" (UID: "3a4feec2-a4ba-4906-90f7-0912fc708375"). InnerVolumeSpecName "kube-api-access-z75mf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:57:28.512825 master-2 kubenswrapper[4776]: I1011 10:57:28.512685 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a4feec2-a4ba-4906-90f7-0912fc708375" (UID: "3a4feec2-a4ba-4906-90f7-0912fc708375"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:28.517764 master-2 kubenswrapper[4776]: I1011 10:57:28.517708 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-config-data" (OuterVolumeSpecName: "config-data") pod "3a4feec2-a4ba-4906-90f7-0912fc708375" (UID: "3a4feec2-a4ba-4906-90f7-0912fc708375"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:28.535088 master-2 kubenswrapper[4776]: I1011 10:57:28.534660 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3a4feec2-a4ba-4906-90f7-0912fc708375" (UID: "3a4feec2-a4ba-4906-90f7-0912fc708375"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:28.537923 master-2 kubenswrapper[4776]: I1011 10:57:28.537871 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3a4feec2-a4ba-4906-90f7-0912fc708375" (UID: "3a4feec2-a4ba-4906-90f7-0912fc708375"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:28.591464 master-2 kubenswrapper[4776]: I1011 10:57:28.591422 4776 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-internal-tls-certs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:28.591464 master-2 kubenswrapper[4776]: I1011 10:57:28.591463 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:28.591464 master-2 kubenswrapper[4776]: I1011 10:57:28.591473 4776 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-public-tls-certs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:28.591718 master-2 kubenswrapper[4776]: I1011 10:57:28.591482 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z75mf\" (UniqueName: \"kubernetes.io/projected/3a4feec2-a4ba-4906-90f7-0912fc708375-kube-api-access-z75mf\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:28.591718 master-2 kubenswrapper[4776]: I1011 10:57:28.591539 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a4feec2-a4ba-4906-90f7-0912fc708375-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:28.591718 master-2 kubenswrapper[4776]: I1011 10:57:28.591547 4776 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a4feec2-a4ba-4906-90f7-0912fc708375-logs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:29.284571 master-2 kubenswrapper[4776]: I1011 10:57:29.284518 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:57:29.331518 master-2 kubenswrapper[4776]: I1011 10:57:29.331454 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:29.338008 master-2 kubenswrapper[4776]: I1011 10:57:29.337957 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:29.382211 master-2 kubenswrapper[4776]: I1011 10:57:29.382149 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:29.383009 master-2 kubenswrapper[4776]: E1011 10:57:29.382960 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a4feec2-a4ba-4906-90f7-0912fc708375" containerName="nova-api-api" Oct 11 10:57:29.383009 master-2 kubenswrapper[4776]: I1011 10:57:29.383006 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a4feec2-a4ba-4906-90f7-0912fc708375" containerName="nova-api-api" Oct 11 10:57:29.383147 master-2 kubenswrapper[4776]: E1011 10:57:29.383034 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a4feec2-a4ba-4906-90f7-0912fc708375" containerName="nova-api-log" Oct 11 10:57:29.383147 master-2 kubenswrapper[4776]: I1011 10:57:29.383049 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a4feec2-a4ba-4906-90f7-0912fc708375" containerName="nova-api-log" Oct 11 10:57:29.383721 master-2 kubenswrapper[4776]: I1011 10:57:29.383618 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a4feec2-a4ba-4906-90f7-0912fc708375" containerName="nova-api-api" Oct 11 10:57:29.383794 master-2 kubenswrapper[4776]: I1011 10:57:29.383742 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a4feec2-a4ba-4906-90f7-0912fc708375" containerName="nova-api-log" Oct 11 10:57:29.388908 master-2 kubenswrapper[4776]: I1011 10:57:29.388834 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:57:29.392377 master-2 kubenswrapper[4776]: I1011 10:57:29.392126 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 11 10:57:29.392377 master-2 kubenswrapper[4776]: I1011 10:57:29.392351 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 11 10:57:29.392690 master-2 kubenswrapper[4776]: I1011 10:57:29.392481 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 11 10:57:29.401482 master-2 kubenswrapper[4776]: I1011 10:57:29.401263 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:29.505189 master-2 kubenswrapper[4776]: I1011 10:57:29.505084 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb645f14-616d-425d-ae7d-5475565669f8-logs\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.505454 master-2 kubenswrapper[4776]: I1011 10:57:29.505202 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvt9m\" (UniqueName: \"kubernetes.io/projected/bb645f14-616d-425d-ae7d-5475565669f8-kube-api-access-jvt9m\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.505454 master-2 kubenswrapper[4776]: I1011 10:57:29.505253 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.505454 master-2 kubenswrapper[4776]: I1011 10:57:29.505366 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-config-data\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.505454 master-2 kubenswrapper[4776]: I1011 10:57:29.505425 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-public-tls-certs\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.505986 master-2 kubenswrapper[4776]: I1011 10:57:29.505474 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-internal-tls-certs\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.607650 master-2 kubenswrapper[4776]: I1011 10:57:29.607461 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-config-data\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.607869 master-2 kubenswrapper[4776]: I1011 10:57:29.607595 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-public-tls-certs\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.607869 master-2 kubenswrapper[4776]: I1011 10:57:29.607775 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-internal-tls-certs\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.608096 master-2 kubenswrapper[4776]: I1011 10:57:29.608059 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb645f14-616d-425d-ae7d-5475565669f8-logs\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.608840 master-2 kubenswrapper[4776]: I1011 10:57:29.608802 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvt9m\" (UniqueName: \"kubernetes.io/projected/bb645f14-616d-425d-ae7d-5475565669f8-kube-api-access-jvt9m\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.608840 master-2 kubenswrapper[4776]: I1011 10:57:29.608840 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.608840 master-2 kubenswrapper[4776]: I1011 10:57:29.608708 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb645f14-616d-425d-ae7d-5475565669f8-logs\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.611425 master-2 kubenswrapper[4776]: I1011 10:57:29.611391 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-internal-tls-certs\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.611568 master-2 kubenswrapper[4776]: I1011 10:57:29.611535 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-public-tls-certs\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.612527 master-2 kubenswrapper[4776]: I1011 10:57:29.612492 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.613978 master-2 kubenswrapper[4776]: I1011 10:57:29.613930 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb645f14-616d-425d-ae7d-5475565669f8-config-data\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.631011 master-2 kubenswrapper[4776]: I1011 10:57:29.630858 4776 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jvt9m\" (UniqueName: \"kubernetes.io/projected/bb645f14-616d-425d-ae7d-5475565669f8-kube-api-access-jvt9m\") pod \"nova-api-2\" (UID: \"bb645f14-616d-425d-ae7d-5475565669f8\") " pod="openstack/nova-api-2" Oct 11 10:57:29.711877 master-2 kubenswrapper[4776]: I1011 10:57:29.711797 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2" Oct 11 10:57:30.074515 master-2 kubenswrapper[4776]: I1011 10:57:30.074445 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a4feec2-a4ba-4906-90f7-0912fc708375" path="/var/lib/kubelet/pods/3a4feec2-a4ba-4906-90f7-0912fc708375/volumes" Oct 11 10:57:30.149387 master-2 kubenswrapper[4776]: I1011 10:57:30.149206 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"] Oct 11 10:57:30.292514 master-2 kubenswrapper[4776]: I1011 10:57:30.292448 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"bb645f14-616d-425d-ae7d-5475565669f8","Type":"ContainerStarted","Data":"7ef456ed31622c073c43a52e6023552ab07b72c1c1590dab0764c3f6b523bfe0"} Oct 11 10:57:30.292514 master-2 kubenswrapper[4776]: I1011 10:57:30.292503 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"bb645f14-616d-425d-ae7d-5475565669f8","Type":"ContainerStarted","Data":"46090c8fd9d1e9147775f8b19377ed02da1a8e89c02f5e59811e92adf89b80fe"} Oct 11 10:57:31.302014 master-2 kubenswrapper[4776]: I1011 10:57:31.301965 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"bb645f14-616d-425d-ae7d-5475565669f8","Type":"ContainerStarted","Data":"58b9f677007e17c15343e0f132c977a59571d3a476d7cd1ee8cb0938902307a6"} Oct 11 10:57:31.443853 master-2 kubenswrapper[4776]: I1011 10:57:31.443743 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-2" podStartSLOduration=2.443720597 podStartE2EDuration="2.443720597s" podCreationTimestamp="2025-10-11 10:57:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:57:31.332429434 +0000 UTC m=+1886.116856163" watchObservedRunningTime="2025-10-11 10:57:31.443720597 +0000 UTC m=+1886.228147306" Oct 11 10:57:31.445316 master-2 kubenswrapper[4776]: I1011 10:57:31.445268 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79cbf74f6f-xmqbr"] Oct 11 10:57:31.445547 master-2 kubenswrapper[4776]: I1011 10:57:31.445518 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" podUID="43ef2dc9-3563-4188-8d91-2fc18c396a4a" containerName="dnsmasq-dns" containerID="cri-o://a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142" gracePeriod=10 Oct 11 10:57:32.212380 master-2 kubenswrapper[4776]: E1011 10:57:32.212277 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-443bca660f4af73bca6744e7e355caaadc56f7779ac49304e17e54dc85a7286c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-96fd4fc611b6a4bbbf2d990ec6feda8d8e8df9da67b88b7ad2b8d15e7527f7ac\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-conmon-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-conmon-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:57:32.212856 master-2 kubenswrapper[4776]: E1011 10:57:32.212329 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-96fd4fc611b6a4bbbf2d990ec6feda8d8e8df9da67b88b7ad2b8d15e7527f7ac\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-443bca660f4af73bca6744e7e355caaadc56f7779ac49304e17e54dc85a7286c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-conmon-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-conmon-a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:57:32.212856 master-2 kubenswrapper[4776]: E1011 10:57:32.212614 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-conmon-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-443bca660f4af73bca6744e7e355caaadc56f7779ac49304e17e54dc85a7286c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-96fd4fc611b6a4bbbf2d990ec6feda8d8e8df9da67b88b7ad2b8d15e7527f7ac\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-conmon-a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-conmon-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:57:32.213071 master-2 kubenswrapper[4776]: E1011 10:57:32.212960 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-conmon-a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-conmon-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-96fd4fc611b6a4bbbf2d990ec6feda8d8e8df9da67b88b7ad2b8d15e7527f7ac\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-443bca660f4af73bca6744e7e355caaadc56f7779ac49304e17e54dc85a7286c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-conmon-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:57:32.223485 master-2 kubenswrapper[4776]: E1011 10:57:32.223236 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-conmon-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-96fd4fc611b6a4bbbf2d990ec6feda8d8e8df9da67b88b7ad2b8d15e7527f7ac\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-conmon-a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-443bca660f4af73bca6744e7e355caaadc56f7779ac49304e17e54dc85a7286c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-conmon-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:57:32.224362 master-2 kubenswrapper[4776]: E1011 10:57:32.223705 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-conmon-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-443bca660f4af73bca6744e7e355caaadc56f7779ac49304e17e54dc85a7286c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-96fd4fc611b6a4bbbf2d990ec6feda8d8e8df9da67b88b7ad2b8d15e7527f7ac\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-conmon-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:57:32.231773 master-2 kubenswrapper[4776]: E1011 10:57:32.229548 4776 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-conmon-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-conmon-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-443bca660f4af73bca6744e7e355caaadc56f7779ac49304e17e54dc85a7286c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea24dce_b1ed_4d57_bd23_f74edc2df1c3.slice/crio-b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ef2dc9_3563_4188_8d91_2fc18c396a4a.slice/crio-a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-96fd4fc611b6a4bbbf2d990ec6feda8d8e8df9da67b88b7ad2b8d15e7527f7ac\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb7e6f8_0f63_47b1_b46c_8ae9fbc3fe8b.slice/crio-c6785f07b98204327115242971db8c73143b88af6ee276263200abe932a87554.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-756446384fdb9e2177b8e63b15bf18a93f7d326b0bab63a1c06b6935a4e9d954.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e253bd1_27b5_4423_8212_c9e698198d47.slice/crio-conmon-01f033e235c275972467ea99ca7401f62282c5a2012255cf51ba798d95e44b70.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:57:32.314040 master-2 kubenswrapper[4776]: I1011 10:57:32.313780 4776 generic.go:334] "Generic (PLEG): container finished" podID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerID="b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9" exitCode=137 Oct 11 10:57:32.314040 master-2 kubenswrapper[4776]: I1011 10:57:32.313851 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3","Type":"ContainerDied","Data":"b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9"} Oct 11 10:57:32.316220 master-2 kubenswrapper[4776]: I1011 10:57:32.315982 4776 generic.go:334] "Generic (PLEG): container finished" podID="43ef2dc9-3563-4188-8d91-2fc18c396a4a" containerID="a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142" exitCode=0 Oct 11 10:57:32.316220 master-2 kubenswrapper[4776]: I1011 10:57:32.316049 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" event={"ID":"43ef2dc9-3563-4188-8d91-2fc18c396a4a","Type":"ContainerDied","Data":"a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142"} Oct 11 10:57:32.603199 master-2 kubenswrapper[4776]: I1011 10:57:32.602193 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 11 10:57:32.651860 master-2 kubenswrapper[4776]: I1011 10:57:32.651815 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:57:32.664966 master-2 kubenswrapper[4776]: I1011 10:57:32.664920 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 11 10:57:32.782888 master-2 kubenswrapper[4776]: I1011 10:57:32.782821 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-sb\") pod \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " Oct 11 10:57:32.783267 master-2 kubenswrapper[4776]: I1011 10:57:32.782908 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-nb\") pod \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " Oct 11 10:57:32.783267 master-2 kubenswrapper[4776]: I1011 10:57:32.782994 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcqbh\" (UniqueName: \"kubernetes.io/projected/43ef2dc9-3563-4188-8d91-2fc18c396a4a-kube-api-access-xcqbh\") pod \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " Oct 11 10:57:32.783267 master-2 kubenswrapper[4776]: I1011 10:57:32.783062 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-svc\") pod \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " Oct 11 10:57:32.783267 master-2 kubenswrapper[4776]: I1011 10:57:32.783103 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-swift-storage-0\") pod \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " Oct 11 10:57:32.783267 master-2 kubenswrapper[4776]: I1011 10:57:32.783124 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-config\") pod \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\" (UID: \"43ef2dc9-3563-4188-8d91-2fc18c396a4a\") " Oct 11 10:57:32.799135 master-2 kubenswrapper[4776]: I1011 10:57:32.799076 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43ef2dc9-3563-4188-8d91-2fc18c396a4a-kube-api-access-xcqbh" (OuterVolumeSpecName: "kube-api-access-xcqbh") pod "43ef2dc9-3563-4188-8d91-2fc18c396a4a" (UID: "43ef2dc9-3563-4188-8d91-2fc18c396a4a"). InnerVolumeSpecName "kube-api-access-xcqbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:57:32.827561 master-2 kubenswrapper[4776]: I1011 10:57:32.827500 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "43ef2dc9-3563-4188-8d91-2fc18c396a4a" (UID: "43ef2dc9-3563-4188-8d91-2fc18c396a4a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:57:32.842748 master-2 kubenswrapper[4776]: I1011 10:57:32.842604 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "43ef2dc9-3563-4188-8d91-2fc18c396a4a" (UID: "43ef2dc9-3563-4188-8d91-2fc18c396a4a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:57:32.845651 master-2 kubenswrapper[4776]: I1011 10:57:32.845562 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "43ef2dc9-3563-4188-8d91-2fc18c396a4a" (UID: "43ef2dc9-3563-4188-8d91-2fc18c396a4a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:57:32.854656 master-2 kubenswrapper[4776]: I1011 10:57:32.854591 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-config" (OuterVolumeSpecName: "config") pod "43ef2dc9-3563-4188-8d91-2fc18c396a4a" (UID: "43ef2dc9-3563-4188-8d91-2fc18c396a4a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:57:32.861280 master-2 kubenswrapper[4776]: I1011 10:57:32.861224 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "43ef2dc9-3563-4188-8d91-2fc18c396a4a" (UID: "43ef2dc9-3563-4188-8d91-2fc18c396a4a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:57:32.886184 master-2 kubenswrapper[4776]: I1011 10:57:32.886076 4776 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-swift-storage-0\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:32.886184 master-2 kubenswrapper[4776]: I1011 10:57:32.886124 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-config\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:32.886184 master-2 kubenswrapper[4776]: I1011 10:57:32.886135 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-sb\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:32.886184 master-2 kubenswrapper[4776]: I1011 10:57:32.886143 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-ovsdbserver-nb\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:32.886184 master-2 kubenswrapper[4776]: I1011 10:57:32.886151 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcqbh\" (UniqueName: \"kubernetes.io/projected/43ef2dc9-3563-4188-8d91-2fc18c396a4a-kube-api-access-xcqbh\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:32.886184 master-2 kubenswrapper[4776]: I1011 10:57:32.886159 4776 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43ef2dc9-3563-4188-8d91-2fc18c396a4a-dns-svc\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:32.968192 master-2 kubenswrapper[4776]: I1011 10:57:32.968148 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Oct 11 10:57:33.088207 master-2 kubenswrapper[4776]: I1011 10:57:33.088162 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2tgl\" (UniqueName: \"kubernetes.io/projected/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-kube-api-access-p2tgl\") pod \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " Oct 11 10:57:33.088508 master-2 kubenswrapper[4776]: I1011 10:57:33.088489 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-scripts\") pod \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " Oct 11 10:57:33.088823 master-2 kubenswrapper[4776]: I1011 10:57:33.088805 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-config-data\") pod \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " Oct 11 10:57:33.088995 master-2 kubenswrapper[4776]: I1011 10:57:33.088977 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-combined-ca-bundle\") pod \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\" (UID: \"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3\") " Oct 11 10:57:33.090819 master-2 kubenswrapper[4776]: I1011 10:57:33.090791 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-kube-api-access-p2tgl" (OuterVolumeSpecName: "kube-api-access-p2tgl") pod "7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" (UID: "7ea24dce-b1ed-4d57-bd23-f74edc2df1c3"). InnerVolumeSpecName "kube-api-access-p2tgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:57:33.091196 master-2 kubenswrapper[4776]: I1011 10:57:33.091149 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-scripts" (OuterVolumeSpecName: "scripts") pod "7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" (UID: "7ea24dce-b1ed-4d57-bd23-f74edc2df1c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:33.173298 master-2 kubenswrapper[4776]: I1011 10:57:33.173244 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" (UID: "7ea24dce-b1ed-4d57-bd23-f74edc2df1c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:33.191009 master-2 kubenswrapper[4776]: I1011 10:57:33.190956 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-config-data" (OuterVolumeSpecName: "config-data") pod "7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" (UID: "7ea24dce-b1ed-4d57-bd23-f74edc2df1c3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:57:33.191297 master-2 kubenswrapper[4776]: I1011 10:57:33.191249 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:33.191359 master-2 kubenswrapper[4776]: I1011 10:57:33.191296 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:33.191359 master-2 kubenswrapper[4776]: I1011 10:57:33.191313 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2tgl\" (UniqueName: \"kubernetes.io/projected/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-kube-api-access-p2tgl\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:33.191359 master-2 kubenswrapper[4776]: I1011 10:57:33.191325 4776 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3-scripts\") on node \"master-2\" DevicePath \"\"" Oct 11 10:57:33.326274 master-2 kubenswrapper[4776]: I1011 10:57:33.326213 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" event={"ID":"43ef2dc9-3563-4188-8d91-2fc18c396a4a","Type":"ContainerDied","Data":"b2468f50ae0c430498d74cf1413c34467e7f27dc91b6590e32b6437ebfd5f4be"} Oct 11 10:57:33.326274 master-2 kubenswrapper[4776]: I1011 10:57:33.326223 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79cbf74f6f-xmqbr" Oct 11 10:57:33.326999 master-2 kubenswrapper[4776]: I1011 10:57:33.326284 4776 scope.go:117] "RemoveContainer" containerID="a0f2ded888fa5168bfe5ee7ea6d432c9821909501add70a5e380df624bee8142" Oct 11 10:57:33.329804 master-2 kubenswrapper[4776]: I1011 10:57:33.329758 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7ea24dce-b1ed-4d57-bd23-f74edc2df1c3","Type":"ContainerDied","Data":"317d33ecc929d12f2e23fd3b8f482ae3dda6757c15d263ac0d8348bb3993f8df"} Oct 11 10:57:33.329934 master-2 kubenswrapper[4776]: I1011 10:57:33.329908 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Oct 11 10:57:33.347536 master-2 kubenswrapper[4776]: I1011 10:57:33.347135 4776 scope.go:117] "RemoveContainer" containerID="c248f341904d5a6b165429ebe51185bc10d8bbf637b7b5baa2593c4ecc482b79" Oct 11 10:57:33.358630 master-2 kubenswrapper[4776]: I1011 10:57:33.358586 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 11 10:57:33.369754 master-2 kubenswrapper[4776]: I1011 10:57:33.369694 4776 scope.go:117] "RemoveContainer" containerID="b589c48ece6f09223d4e09b52752ed74e569b5980465a50a09d98ca7348faba9" Oct 11 10:57:33.377565 master-2 kubenswrapper[4776]: I1011 10:57:33.377531 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79cbf74f6f-xmqbr"] Oct 11 10:57:33.385352 master-2 kubenswrapper[4776]: I1011 10:57:33.385311 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79cbf74f6f-xmqbr"] Oct 11 10:57:33.413103 master-2 kubenswrapper[4776]: I1011 10:57:33.413010 4776 scope.go:117] "RemoveContainer" containerID="37c9c3f974a6d34524b1afbc2045074821422c1756dbdd22caa6addb90b8625b" Oct 11 10:57:33.431351 master-2 kubenswrapper[4776]: I1011 10:57:33.431299 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Oct 11 10:57:33.438664 master-2 kubenswrapper[4776]: I1011 10:57:33.438618 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Oct 11 10:57:33.440451 master-2 kubenswrapper[4776]: I1011 10:57:33.440423 4776 scope.go:117] "RemoveContainer" containerID="326e9fc4149cc880197755466f98e8b8e170fd384dd03ab05ef261cdaf3f4253" Oct 11 10:57:33.472846 master-2 kubenswrapper[4776]: I1011 10:57:33.470762 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Oct 11 10:57:33.473280 master-2 kubenswrapper[4776]: E1011 10:57:33.473241 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43ef2dc9-3563-4188-8d91-2fc18c396a4a" containerName="init" Oct 11 10:57:33.473280 master-2 kubenswrapper[4776]: I1011 10:57:33.473273 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="43ef2dc9-3563-4188-8d91-2fc18c396a4a" containerName="init" Oct 11 10:57:33.473384 master-2 kubenswrapper[4776]: E1011 10:57:33.473290 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43ef2dc9-3563-4188-8d91-2fc18c396a4a" containerName="dnsmasq-dns" Oct 11 10:57:33.473384 master-2 kubenswrapper[4776]: I1011 10:57:33.473296 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="43ef2dc9-3563-4188-8d91-2fc18c396a4a" containerName="dnsmasq-dns" Oct 11 10:57:33.473384 master-2 kubenswrapper[4776]: E1011 10:57:33.473313 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-listener" Oct 11 10:57:33.473384 master-2 kubenswrapper[4776]: I1011 10:57:33.473320 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-listener" Oct 11 10:57:33.473384 master-2 kubenswrapper[4776]: E1011 10:57:33.473338 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-api" Oct 11 10:57:33.473384 master-2 kubenswrapper[4776]: I1011 10:57:33.473345 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-api" Oct 11 10:57:33.473384 master-2 kubenswrapper[4776]: E1011 10:57:33.473353 4776 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-evaluator" Oct 11 10:57:33.473384 master-2 kubenswrapper[4776]: I1011 10:57:33.473359 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-evaluator" Oct 11 10:57:33.473384 master-2 kubenswrapper[4776]: E1011 10:57:33.473386 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-notifier" Oct 11 10:57:33.473763 master-2 kubenswrapper[4776]: I1011 10:57:33.473392 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-notifier" Oct 11 10:57:33.473763 master-2 kubenswrapper[4776]: I1011 10:57:33.473562 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-notifier" Oct 11 10:57:33.473763 master-2 kubenswrapper[4776]: I1011 10:57:33.473575 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-listener" Oct 11 10:57:33.473763 master-2 kubenswrapper[4776]: I1011 10:57:33.473583 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-evaluator" Oct 11 10:57:33.473763 master-2 kubenswrapper[4776]: I1011 10:57:33.473599 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" containerName="aodh-api" Oct 11 10:57:33.473763 master-2 kubenswrapper[4776]: I1011 10:57:33.473618 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="43ef2dc9-3563-4188-8d91-2fc18c396a4a" containerName="dnsmasq-dns" Oct 11 10:57:33.475890 master-2 kubenswrapper[4776]: I1011 10:57:33.475858 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Oct 11 10:57:33.475985 master-2 kubenswrapper[4776]: I1011 10:57:33.475891 4776 scope.go:117] "RemoveContainer" containerID="e4ba08b3d6a3495897a11277b1fc244a6f1d3e3d1223b0873ad4182dea2a4b01" Oct 11 10:57:33.482982 master-2 kubenswrapper[4776]: I1011 10:57:33.482889 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Oct 11 10:57:33.482982 master-2 kubenswrapper[4776]: I1011 10:57:33.482960 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Oct 11 10:57:33.483443 master-2 kubenswrapper[4776]: I1011 10:57:33.483414 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Oct 11 10:57:33.483907 master-2 kubenswrapper[4776]: I1011 10:57:33.483860 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Oct 11 10:57:33.497362 master-2 kubenswrapper[4776]: I1011 10:57:33.497302 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Oct 11 10:57:33.602894 master-2 kubenswrapper[4776]: I1011 10:57:33.602733 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-scripts\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.602894 master-2 kubenswrapper[4776]: I1011 10:57:33.602808 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqm7m\" (UniqueName: \"kubernetes.io/projected/3a33cf0a-51bc-4906-9c65-b043d38426a0-kube-api-access-qqm7m\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.603323 master-2 kubenswrapper[4776]: I1011 10:57:33.603201 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-public-tls-certs\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.603488 master-2 kubenswrapper[4776]: I1011 10:57:33.603432 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-config-data\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.603689 master-2 kubenswrapper[4776]: I1011 10:57:33.603643 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-combined-ca-bundle\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.604043 master-2 kubenswrapper[4776]: I1011 10:57:33.604003 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-internal-tls-certs\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.707565 master-2 kubenswrapper[4776]: I1011 10:57:33.707445 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-internal-tls-certs\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.707936 master-2 kubenswrapper[4776]: I1011 10:57:33.707613 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-scripts\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.707936 master-2 kubenswrapper[4776]: I1011 10:57:33.707643 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqm7m\" (UniqueName: \"kubernetes.io/projected/3a33cf0a-51bc-4906-9c65-b043d38426a0-kube-api-access-qqm7m\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.707936 master-2 kubenswrapper[4776]: I1011 10:57:33.707697 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-public-tls-certs\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.707936 master-2 kubenswrapper[4776]: I1011 10:57:33.707731 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-config-data\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.707936 master-2 kubenswrapper[4776]: I1011 10:57:33.707769 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-combined-ca-bundle\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.711656 master-2 kubenswrapper[4776]: I1011 10:57:33.711616 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-combined-ca-bundle\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.711930 master-2 kubenswrapper[4776]: I1011 10:57:33.711883 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-internal-tls-certs\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.711990 master-2 kubenswrapper[4776]: I1011 10:57:33.711913 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-public-tls-certs\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.712196 master-2 kubenswrapper[4776]: I1011 10:57:33.712172 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-scripts\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.713307 master-2 kubenswrapper[4776]: I1011 10:57:33.713246 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3a33cf0a-51bc-4906-9c65-b043d38426a0-config-data\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.731616 master-2 kubenswrapper[4776]: I1011 10:57:33.731389 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqm7m\" (UniqueName: \"kubernetes.io/projected/3a33cf0a-51bc-4906-9c65-b043d38426a0-kube-api-access-qqm7m\") pod \"aodh-0\" (UID: \"3a33cf0a-51bc-4906-9c65-b043d38426a0\") " pod="openstack/aodh-0" Oct 11 10:57:33.812504 master-2 kubenswrapper[4776]: I1011 10:57:33.812409 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Oct 11 10:57:34.072387 master-2 kubenswrapper[4776]: I1011 10:57:34.072335 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43ef2dc9-3563-4188-8d91-2fc18c396a4a" path="/var/lib/kubelet/pods/43ef2dc9-3563-4188-8d91-2fc18c396a4a/volumes" Oct 11 10:57:34.073110 master-2 kubenswrapper[4776]: I1011 10:57:34.073082 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ea24dce-b1ed-4d57-bd23-f74edc2df1c3" path="/var/lib/kubelet/pods/7ea24dce-b1ed-4d57-bd23-f74edc2df1c3/volumes" Oct 11 10:57:34.245883 master-2 kubenswrapper[4776]: I1011 10:57:34.245830 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Oct 11 10:57:34.247123 master-2 kubenswrapper[4776]: W1011 10:57:34.247078 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a33cf0a_51bc_4906_9c65_b043d38426a0.slice/crio-19661d60a7a0fd317aa5431eae2ad22091d0fcd48ad92b5160e4287af786920e WatchSource:0}: Error finding container 19661d60a7a0fd317aa5431eae2ad22091d0fcd48ad92b5160e4287af786920e: Status 404 returned error can't find the container with id 19661d60a7a0fd317aa5431eae2ad22091d0fcd48ad92b5160e4287af786920e Oct 11 10:57:34.345093 master-2 kubenswrapper[4776]: I1011 10:57:34.345032 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3a33cf0a-51bc-4906-9c65-b043d38426a0","Type":"ContainerStarted","Data":"19661d60a7a0fd317aa5431eae2ad22091d0fcd48ad92b5160e4287af786920e"} Oct 11 10:57:35.354356 master-2 kubenswrapper[4776]: I1011 10:57:35.354276 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3a33cf0a-51bc-4906-9c65-b043d38426a0","Type":"ContainerStarted","Data":"c11d21bdc93d76640ffec38d2311b1bb73741a7af0162d358c439a7fcb73f54d"} Oct 11 10:57:36.366453 master-2 kubenswrapper[4776]: I1011 10:57:36.366288 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3a33cf0a-51bc-4906-9c65-b043d38426a0","Type":"ContainerStarted","Data":"0ca0b1839df140e375b9d682f6202be40a80947d84dc2a6429db22f10a18baf6"} Oct 11 10:57:36.366453 master-2 kubenswrapper[4776]: I1011 10:57:36.366375 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3a33cf0a-51bc-4906-9c65-b043d38426a0","Type":"ContainerStarted","Data":"b4607e1e750b4ae0f2b91f175a40a1770e9e4c8c24782d97344e5983b46bec9a"} Oct 11 10:57:37.385505 master-2 kubenswrapper[4776]: I1011 10:57:37.385395 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3a33cf0a-51bc-4906-9c65-b043d38426a0","Type":"ContainerStarted","Data":"a17071c798c0cf44093889e34a7a24c7176010cd96550fb553cc6fc668d1566f"} Oct 11 10:57:37.441035 master-2 kubenswrapper[4776]: I1011 10:57:37.440861 4776 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.8174366929999999 podStartE2EDuration="4.440824506s" podCreationTimestamp="2025-10-11 10:57:33 +0000 UTC" firstStartedPulling="2025-10-11 10:57:34.24947563 +0000 UTC m=+1889.033902329" lastFinishedPulling="2025-10-11 10:57:36.872863413 +0000 UTC m=+1891.657290142" observedRunningTime="2025-10-11 10:57:37.424530645 +0000 UTC m=+1892.208957394" watchObservedRunningTime="2025-10-11 10:57:37.440824506 +0000 UTC m=+1892.225251225" Oct 11 10:57:39.712781 master-2 kubenswrapper[4776]: I1011 10:57:39.712638 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-2" Oct 11 10:57:39.712781 master-2 kubenswrapper[4776]: I1011 10:57:39.712792 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-2" Oct 11 10:57:40.726943 master-2 kubenswrapper[4776]: I1011 10:57:40.726834 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-2" podUID="bb645f14-616d-425d-ae7d-5475565669f8" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.0.179:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:57:40.727844 master-2 kubenswrapper[4776]: I1011 10:57:40.726927 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-2" podUID="bb645f14-616d-425d-ae7d-5475565669f8" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.0.179:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:57:49.719738 master-2 kubenswrapper[4776]: I1011 10:57:49.719665 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-2" Oct 11 10:57:49.720390 master-2 kubenswrapper[4776]: I1011 10:57:49.719861 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-2" Oct 11 10:57:49.720390 master-2 kubenswrapper[4776]: I1011 10:57:49.720181 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-2" Oct 11 10:57:49.720390 master-2 kubenswrapper[4776]: I1011 10:57:49.720204 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-2" Oct 11 10:57:49.725004 master-2 kubenswrapper[4776]: I1011 10:57:49.724973 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-2" Oct 11 10:57:49.725208 master-2 kubenswrapper[4776]: I1011 10:57:49.725179 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-2" Oct 11 10:57:59.575926 master-2 kubenswrapper[4776]: I1011 10:57:59.575840 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:57:59.576583 master-2 kubenswrapper[4776]: I1011 10:57:59.576127 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="69f600e5-9321-41ce-9ec4-abee215f69fe" containerName="nova-scheduler-scheduler" containerID="cri-o://a1ea9dd039fb792b9b4194ff7606c452151d402cebba8363c5c12a6a1d82ba1c" gracePeriod=30 Oct 11 10:58:01.598522 master-2 kubenswrapper[4776]: I1011 10:58:01.598457 4776 generic.go:334] "Generic (PLEG): container finished" podID="69f600e5-9321-41ce-9ec4-abee215f69fe" containerID="a1ea9dd039fb792b9b4194ff7606c452151d402cebba8363c5c12a6a1d82ba1c" exitCode=0 Oct 11 10:58:01.598522 master-2 kubenswrapper[4776]: 
I1011 10:58:01.598510 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"69f600e5-9321-41ce-9ec4-abee215f69fe","Type":"ContainerDied","Data":"a1ea9dd039fb792b9b4194ff7606c452151d402cebba8363c5c12a6a1d82ba1c"} Oct 11 10:58:01.746210 master-2 kubenswrapper[4776]: I1011 10:58:01.746172 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 11 10:58:01.901950 master-2 kubenswrapper[4776]: I1011 10:58:01.901790 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwj7l\" (UniqueName: \"kubernetes.io/projected/69f600e5-9321-41ce-9ec4-abee215f69fe-kube-api-access-mwj7l\") pod \"69f600e5-9321-41ce-9ec4-abee215f69fe\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " Oct 11 10:58:01.901950 master-2 kubenswrapper[4776]: I1011 10:58:01.901898 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-combined-ca-bundle\") pod \"69f600e5-9321-41ce-9ec4-abee215f69fe\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " Oct 11 10:58:01.902296 master-2 kubenswrapper[4776]: I1011 10:58:01.902064 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-config-data\") pod \"69f600e5-9321-41ce-9ec4-abee215f69fe\" (UID: \"69f600e5-9321-41ce-9ec4-abee215f69fe\") " Oct 11 10:58:01.904983 master-2 kubenswrapper[4776]: I1011 10:58:01.904920 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69f600e5-9321-41ce-9ec4-abee215f69fe-kube-api-access-mwj7l" (OuterVolumeSpecName: "kube-api-access-mwj7l") pod "69f600e5-9321-41ce-9ec4-abee215f69fe" (UID: "69f600e5-9321-41ce-9ec4-abee215f69fe"). InnerVolumeSpecName "kube-api-access-mwj7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:58:01.951498 master-2 kubenswrapper[4776]: I1011 10:58:01.951439 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69f600e5-9321-41ce-9ec4-abee215f69fe" (UID: "69f600e5-9321-41ce-9ec4-abee215f69fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:58:01.952449 master-2 kubenswrapper[4776]: I1011 10:58:01.952407 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-config-data" (OuterVolumeSpecName: "config-data") pod "69f600e5-9321-41ce-9ec4-abee215f69fe" (UID: "69f600e5-9321-41ce-9ec4-abee215f69fe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:58:02.004270 master-2 kubenswrapper[4776]: I1011 10:58:02.004224 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwj7l\" (UniqueName: \"kubernetes.io/projected/69f600e5-9321-41ce-9ec4-abee215f69fe-kube-api-access-mwj7l\") on node \"master-2\" DevicePath \"\"" Oct 11 10:58:02.004270 master-2 kubenswrapper[4776]: I1011 10:58:02.004260 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:58:02.004270 master-2 kubenswrapper[4776]: I1011 10:58:02.004271 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69f600e5-9321-41ce-9ec4-abee215f69fe-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:58:02.611005 master-2 kubenswrapper[4776]: I1011 10:58:02.610951 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"69f600e5-9321-41ce-9ec4-abee215f69fe","Type":"ContainerDied","Data":"bb574ec91c838b585beb61a822a5e96518597aaa45bd1ce4b4d86481bf2e5fb7"} Oct 11 10:58:02.611538 master-2 kubenswrapper[4776]: I1011 10:58:02.611017 4776 scope.go:117] "RemoveContainer" containerID="a1ea9dd039fb792b9b4194ff7606c452151d402cebba8363c5c12a6a1d82ba1c" Oct 11 10:58:02.611538 master-2 kubenswrapper[4776]: I1011 10:58:02.611066 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 11 10:58:02.645649 master-2 kubenswrapper[4776]: I1011 10:58:02.645196 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:58:02.655478 master-2 kubenswrapper[4776]: I1011 10:58:02.655418 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:58:02.681918 master-2 kubenswrapper[4776]: I1011 10:58:02.681851 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:58:02.682308 master-2 kubenswrapper[4776]: E1011 10:58:02.682263 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f600e5-9321-41ce-9ec4-abee215f69fe" containerName="nova-scheduler-scheduler" Oct 11 10:58:02.682308 master-2 kubenswrapper[4776]: I1011 10:58:02.682279 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f600e5-9321-41ce-9ec4-abee215f69fe" containerName="nova-scheduler-scheduler" Oct 11 10:58:02.683648 master-2 kubenswrapper[4776]: I1011 10:58:02.682501 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="69f600e5-9321-41ce-9ec4-abee215f69fe" containerName="nova-scheduler-scheduler" Oct 11 10:58:02.683648 master-2 kubenswrapper[4776]: I1011 10:58:02.683268 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 11 10:58:02.686711 master-2 kubenswrapper[4776]: I1011 10:58:02.686500 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 11 10:58:02.716770 master-2 kubenswrapper[4776]: I1011 10:58:02.707085 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:58:02.819643 master-2 kubenswrapper[4776]: I1011 10:58:02.819518 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b694ecc8-10eb-40a7-8c2c-a622bd70f775-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b694ecc8-10eb-40a7-8c2c-a622bd70f775\") " pod="openstack/nova-scheduler-0" Oct 11 10:58:02.819915 master-2 kubenswrapper[4776]: I1011 10:58:02.819665 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b694ecc8-10eb-40a7-8c2c-a622bd70f775-config-data\") pod \"nova-scheduler-0\" (UID: \"b694ecc8-10eb-40a7-8c2c-a622bd70f775\") " pod="openstack/nova-scheduler-0" Oct 11 10:58:02.819969 master-2 kubenswrapper[4776]: I1011 10:58:02.819908 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2b2b\" (UniqueName: \"kubernetes.io/projected/b694ecc8-10eb-40a7-8c2c-a622bd70f775-kube-api-access-t2b2b\") pod \"nova-scheduler-0\" (UID: \"b694ecc8-10eb-40a7-8c2c-a622bd70f775\") " pod="openstack/nova-scheduler-0" Oct 11 10:58:02.922082 master-2 kubenswrapper[4776]: I1011 10:58:02.922014 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b694ecc8-10eb-40a7-8c2c-a622bd70f775-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b694ecc8-10eb-40a7-8c2c-a622bd70f775\") " pod="openstack/nova-scheduler-0" Oct 11 10:58:02.922327 master-2 kubenswrapper[4776]: I1011 10:58:02.922093 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b694ecc8-10eb-40a7-8c2c-a622bd70f775-config-data\") pod \"nova-scheduler-0\" (UID: \"b694ecc8-10eb-40a7-8c2c-a622bd70f775\") " pod="openstack/nova-scheduler-0" Oct 11 10:58:02.922327 master-2 kubenswrapper[4776]: I1011 10:58:02.922200 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2b2b\" (UniqueName: \"kubernetes.io/projected/b694ecc8-10eb-40a7-8c2c-a622bd70f775-kube-api-access-t2b2b\") pod \"nova-scheduler-0\" (UID: \"b694ecc8-10eb-40a7-8c2c-a622bd70f775\") " pod="openstack/nova-scheduler-0" Oct 11 10:58:02.925531 master-2 kubenswrapper[4776]: I1011 10:58:02.925457 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b694ecc8-10eb-40a7-8c2c-a622bd70f775-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b694ecc8-10eb-40a7-8c2c-a622bd70f775\") " pod="openstack/nova-scheduler-0" Oct 11 10:58:02.926742 master-2 kubenswrapper[4776]: I1011 10:58:02.926014 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b694ecc8-10eb-40a7-8c2c-a622bd70f775-config-data\") pod \"nova-scheduler-0\" (UID: \"b694ecc8-10eb-40a7-8c2c-a622bd70f775\") " pod="openstack/nova-scheduler-0" Oct 11 10:58:02.941639 master-2 kubenswrapper[4776]: I1011 10:58:02.941594 
4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2b2b\" (UniqueName: \"kubernetes.io/projected/b694ecc8-10eb-40a7-8c2c-a622bd70f775-kube-api-access-t2b2b\") pod \"nova-scheduler-0\" (UID: \"b694ecc8-10eb-40a7-8c2c-a622bd70f775\") " pod="openstack/nova-scheduler-0" Oct 11 10:58:03.001276 master-2 kubenswrapper[4776]: I1011 10:58:03.001181 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 11 10:58:03.411862 master-2 kubenswrapper[4776]: I1011 10:58:03.411826 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 10:58:03.413882 master-2 kubenswrapper[4776]: W1011 10:58:03.413842 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb694ecc8_10eb_40a7_8c2c_a622bd70f775.slice/crio-ad2275d72e32e65141b1ea1581cc0f574a8785c261c95f7a79614683fe1f5b45 WatchSource:0}: Error finding container ad2275d72e32e65141b1ea1581cc0f574a8785c261c95f7a79614683fe1f5b45: Status 404 returned error can't find the container with id ad2275d72e32e65141b1ea1581cc0f574a8785c261c95f7a79614683fe1f5b45 Oct 11 10:58:03.624835 master-2 kubenswrapper[4776]: I1011 10:58:03.623981 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b694ecc8-10eb-40a7-8c2c-a622bd70f775","Type":"ContainerStarted","Data":"86091f5b4e67c173f5c44ec620cd7651b50051b6424651f2d1ee6cc1ed5a40f6"} Oct 11 10:58:03.624835 master-2 kubenswrapper[4776]: I1011 10:58:03.624046 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b694ecc8-10eb-40a7-8c2c-a622bd70f775","Type":"ContainerStarted","Data":"ad2275d72e32e65141b1ea1581cc0f574a8785c261c95f7a79614683fe1f5b45"} Oct 11 10:58:03.650125 master-2 kubenswrapper[4776]: I1011 10:58:03.650044 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.650020517 podStartE2EDuration="1.650020517s" podCreationTimestamp="2025-10-11 10:58:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:58:03.646216884 +0000 UTC m=+1918.430643593" watchObservedRunningTime="2025-10-11 10:58:03.650020517 +0000 UTC m=+1918.434447226" Oct 11 10:58:04.070012 master-2 kubenswrapper[4776]: I1011 10:58:04.069942 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69f600e5-9321-41ce-9ec4-abee215f69fe" path="/var/lib/kubelet/pods/69f600e5-9321-41ce-9ec4-abee215f69fe/volumes" Oct 11 10:58:08.001778 master-2 kubenswrapper[4776]: I1011 10:58:08.001692 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 11 10:58:13.001926 master-2 kubenswrapper[4776]: I1011 10:58:13.001831 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 11 10:58:13.032121 master-2 kubenswrapper[4776]: I1011 10:58:13.032039 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 11 10:58:13.772914 master-2 kubenswrapper[4776]: I1011 10:58:13.772827 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 11 10:58:17.735081 master-2 kubenswrapper[4776]: I1011 10:58:17.734990 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-metadata-0"] Oct 11 10:58:17.736455 master-2 kubenswrapper[4776]: I1011 10:58:17.735394 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-log" containerID="cri-o://a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12" gracePeriod=30 Oct 11 10:58:17.736455 master-2 kubenswrapper[4776]: I1011 10:58:17.735565 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-metadata" containerID="cri-o://a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77" gracePeriod=30 Oct 11 10:58:18.779546 master-2 kubenswrapper[4776]: I1011 10:58:18.779479 4776 generic.go:334] "Generic (PLEG): container finished" podID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerID="a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12" exitCode=143 Oct 11 10:58:18.779546 master-2 kubenswrapper[4776]: I1011 10:58:18.779530 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ec11821-cdf5-45e1-a138-2b62dad57cc3","Type":"ContainerDied","Data":"a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12"} Oct 11 10:58:20.881911 master-2 kubenswrapper[4776]: I1011 10:58:20.881838 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.128.0.172:8775/\": read tcp 10.128.0.2:38124->10.128.0.172:8775: read: connection reset by peer" Oct 11 10:58:20.882845 master-2 kubenswrapper[4776]: I1011 10:58:20.881865 4776 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.128.0.172:8775/\": read tcp 10.128.0.2:38140->10.128.0.172:8775: read: connection reset by peer" Oct 11 10:58:21.676965 master-2 kubenswrapper[4776]: I1011 10:58:21.676927 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 11 10:58:21.806348 master-2 kubenswrapper[4776]: I1011 10:58:21.806284 4776 generic.go:334] "Generic (PLEG): container finished" podID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerID="a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77" exitCode=0 Oct 11 10:58:21.806348 master-2 kubenswrapper[4776]: I1011 10:58:21.806329 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ec11821-cdf5-45e1-a138-2b62dad57cc3","Type":"ContainerDied","Data":"a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77"} Oct 11 10:58:21.806348 master-2 kubenswrapper[4776]: I1011 10:58:21.806336 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 11 10:58:21.806663 master-2 kubenswrapper[4776]: I1011 10:58:21.806365 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ec11821-cdf5-45e1-a138-2b62dad57cc3","Type":"ContainerDied","Data":"9b37b95095e4c99e4c588f15858480444284fe16e29197075e74b845d5fdd23b"} Oct 11 10:58:21.806663 master-2 kubenswrapper[4776]: I1011 10:58:21.806387 4776 scope.go:117] "RemoveContainer" containerID="a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77" Oct 11 10:58:21.810640 master-2 kubenswrapper[4776]: I1011 10:58:21.810619 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ec11821-cdf5-45e1-a138-2b62dad57cc3-logs\") pod \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " Oct 11 10:58:21.810767 master-2 kubenswrapper[4776]: I1011 10:58:21.810747 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75nct\" (UniqueName: \"kubernetes.io/projected/2ec11821-cdf5-45e1-a138-2b62dad57cc3-kube-api-access-75nct\") pod \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " Oct 11 10:58:21.811786 master-2 kubenswrapper[4776]: I1011 10:58:21.810877 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-config-data\") pod \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " Oct 11 10:58:21.811786 master-2 kubenswrapper[4776]: I1011 10:58:21.810918 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-combined-ca-bundle\") pod \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\" (UID: \"2ec11821-cdf5-45e1-a138-2b62dad57cc3\") " Oct 11 10:58:21.811786 master-2 kubenswrapper[4776]: I1011 10:58:21.811005 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ec11821-cdf5-45e1-a138-2b62dad57cc3-logs" (OuterVolumeSpecName: "logs") pod "2ec11821-cdf5-45e1-a138-2b62dad57cc3" (UID: "2ec11821-cdf5-45e1-a138-2b62dad57cc3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:58:21.815588 master-2 kubenswrapper[4776]: I1011 10:58:21.815547 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ec11821-cdf5-45e1-a138-2b62dad57cc3-kube-api-access-75nct" (OuterVolumeSpecName: "kube-api-access-75nct") pod "2ec11821-cdf5-45e1-a138-2b62dad57cc3" (UID: "2ec11821-cdf5-45e1-a138-2b62dad57cc3"). InnerVolumeSpecName "kube-api-access-75nct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:58:21.816022 master-2 kubenswrapper[4776]: I1011 10:58:21.816003 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75nct\" (UniqueName: \"kubernetes.io/projected/2ec11821-cdf5-45e1-a138-2b62dad57cc3-kube-api-access-75nct\") on node \"master-2\" DevicePath \"\"" Oct 11 10:58:21.816069 master-2 kubenswrapper[4776]: I1011 10:58:21.816027 4776 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ec11821-cdf5-45e1-a138-2b62dad57cc3-logs\") on node \"master-2\" DevicePath \"\"" Oct 11 10:58:21.834393 master-2 kubenswrapper[4776]: I1011 10:58:21.834345 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ec11821-cdf5-45e1-a138-2b62dad57cc3" (UID: "2ec11821-cdf5-45e1-a138-2b62dad57cc3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:58:21.850014 master-2 kubenswrapper[4776]: I1011 10:58:21.849950 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-config-data" (OuterVolumeSpecName: "config-data") pod "2ec11821-cdf5-45e1-a138-2b62dad57cc3" (UID: "2ec11821-cdf5-45e1-a138-2b62dad57cc3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:58:21.906392 master-2 kubenswrapper[4776]: I1011 10:58:21.906352 4776 scope.go:117] "RemoveContainer" containerID="a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12" Oct 11 10:58:21.918175 master-2 kubenswrapper[4776]: I1011 10:58:21.918135 4776 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-config-data\") on node \"master-2\" DevicePath \"\"" Oct 11 10:58:21.918175 master-2 kubenswrapper[4776]: I1011 10:58:21.918163 4776 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec11821-cdf5-45e1-a138-2b62dad57cc3-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 10:58:21.926399 master-2 kubenswrapper[4776]: I1011 10:58:21.926114 4776 scope.go:117] "RemoveContainer" containerID="a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77" Oct 11 10:58:21.926506 master-2 kubenswrapper[4776]: E1011 10:58:21.926407 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77\": container with ID starting with a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77 not found: ID does not exist" containerID="a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77" Oct 11 10:58:21.926506 master-2 kubenswrapper[4776]: I1011 10:58:21.926430 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77"} err="failed to get container status \"a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77\": rpc error: code = NotFound desc = could not find container \"a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77\": container with ID starting with a18584b955bb2b8123c57441c63a4e2efa51a0f973ad69891c4c72571a533d77 not found: ID does not exist" Oct 
11 10:58:21.926506 master-2 kubenswrapper[4776]: I1011 10:58:21.926450 4776 scope.go:117] "RemoveContainer" containerID="a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12" Oct 11 10:58:21.926741 master-2 kubenswrapper[4776]: E1011 10:58:21.926723 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12\": container with ID starting with a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12 not found: ID does not exist" containerID="a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12" Oct 11 10:58:21.926813 master-2 kubenswrapper[4776]: I1011 10:58:21.926739 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12"} err="failed to get container status \"a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12\": rpc error: code = NotFound desc = could not find container \"a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12\": container with ID starting with a65ec02005e846f235af58778a72a1f0884d62dbbca3a22971419a06bcdbff12 not found: ID does not exist" Oct 11 10:58:22.137409 master-2 kubenswrapper[4776]: I1011 10:58:22.137290 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 11 10:58:22.146058 master-2 kubenswrapper[4776]: I1011 10:58:22.146001 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 11 10:58:22.177192 master-2 kubenswrapper[4776]: I1011 10:58:22.177102 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 11 10:58:22.177552 master-2 kubenswrapper[4776]: E1011 10:58:22.177502 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-metadata" Oct 11 10:58:22.177552 master-2 kubenswrapper[4776]: I1011 10:58:22.177525 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-metadata" Oct 11 10:58:22.177552 master-2 kubenswrapper[4776]: E1011 10:58:22.177545 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-log" Oct 11 10:58:22.177552 master-2 kubenswrapper[4776]: I1011 10:58:22.177553 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-log" Oct 11 10:58:22.177857 master-2 kubenswrapper[4776]: I1011 10:58:22.177806 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-metadata" Oct 11 10:58:22.177857 master-2 kubenswrapper[4776]: I1011 10:58:22.177838 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" containerName="nova-metadata-log" Oct 11 10:58:22.178815 master-2 kubenswrapper[4776]: I1011 10:58:22.178779 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 11 10:58:22.181271 master-2 kubenswrapper[4776]: I1011 10:58:22.181087 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 11 10:58:22.181944 master-2 kubenswrapper[4776]: I1011 10:58:22.181878 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 11 10:58:22.197323 master-2 kubenswrapper[4776]: I1011 10:58:22.197259 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 11 10:58:22.330874 master-2 kubenswrapper[4776]: I1011 10:58:22.330777 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/62606489-fbdc-4c5f-8cad-744bdba9716c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0" Oct 11 10:58:22.330874 master-2 kubenswrapper[4776]: I1011 10:58:22.330860 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62606489-fbdc-4c5f-8cad-744bdba9716c-logs\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0" Oct 11 10:58:22.331167 master-2 kubenswrapper[4776]: I1011 10:58:22.331110 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62606489-fbdc-4c5f-8cad-744bdba9716c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0" Oct 11 10:58:22.331205 master-2 kubenswrapper[4776]: I1011 10:58:22.331182 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62606489-fbdc-4c5f-8cad-744bdba9716c-config-data\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0" Oct 11 10:58:22.331311 master-2 kubenswrapper[4776]: I1011 10:58:22.331274 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmdt8\" (UniqueName: \"kubernetes.io/projected/62606489-fbdc-4c5f-8cad-744bdba9716c-kube-api-access-vmdt8\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0" Oct 11 10:58:22.434126 master-2 kubenswrapper[4776]: I1011 10:58:22.434058 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/62606489-fbdc-4c5f-8cad-744bdba9716c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0" Oct 11 10:58:22.434126 master-2 kubenswrapper[4776]: I1011 10:58:22.434113 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62606489-fbdc-4c5f-8cad-744bdba9716c-logs\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0" Oct 11 10:58:22.434448 master-2 kubenswrapper[4776]: I1011 10:58:22.434182 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/62606489-fbdc-4c5f-8cad-744bdba9716c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0" Oct 11 10:58:22.434448 master-2 kubenswrapper[4776]: I1011 10:58:22.434210 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62606489-fbdc-4c5f-8cad-744bdba9716c-config-data\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0" Oct 11 10:58:22.434448 master-2 kubenswrapper[4776]: I1011 10:58:22.434245 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmdt8\" (UniqueName: \"kubernetes.io/projected/62606489-fbdc-4c5f-8cad-744bdba9716c-kube-api-access-vmdt8\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0" Oct 11 10:58:22.434958 master-2 kubenswrapper[4776]: I1011 10:58:22.434610 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62606489-fbdc-4c5f-8cad-744bdba9716c-logs\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0" Oct 11 10:58:22.437704 master-2 kubenswrapper[4776]: I1011 10:58:22.437643 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62606489-fbdc-4c5f-8cad-744bdba9716c-config-data\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0" Oct 11 10:58:22.438608 master-2 kubenswrapper[4776]: I1011 10:58:22.438552 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/62606489-fbdc-4c5f-8cad-744bdba9716c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0" Oct 11 10:58:22.439738 master-2 kubenswrapper[4776]: I1011 10:58:22.439704 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62606489-fbdc-4c5f-8cad-744bdba9716c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0" Oct 11 10:58:22.453006 master-2 kubenswrapper[4776]: I1011 10:58:22.452972 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmdt8\" (UniqueName: \"kubernetes.io/projected/62606489-fbdc-4c5f-8cad-744bdba9716c-kube-api-access-vmdt8\") pod \"nova-metadata-0\" (UID: \"62606489-fbdc-4c5f-8cad-744bdba9716c\") " pod="openstack/nova-metadata-0" Oct 11 10:58:22.519485 master-2 kubenswrapper[4776]: I1011 10:58:22.519415 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 11 10:58:22.965093 master-2 kubenswrapper[4776]: I1011 10:58:22.965034 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 11 10:58:23.829086 master-2 kubenswrapper[4776]: I1011 10:58:23.829003 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"62606489-fbdc-4c5f-8cad-744bdba9716c","Type":"ContainerStarted","Data":"ee0892194461a7d514a5c0d355b0ae1f0ad4a1c2f97e3ca26937b5540ddf1137"} Oct 11 10:58:23.829086 master-2 kubenswrapper[4776]: I1011 10:58:23.829059 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"62606489-fbdc-4c5f-8cad-744bdba9716c","Type":"ContainerStarted","Data":"3b305e1064b6f89b22836debeb11b4ad43a606968326f3a60d61a16c65817764"} Oct 11 10:58:23.829086 master-2 kubenswrapper[4776]: I1011 10:58:23.829074 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"62606489-fbdc-4c5f-8cad-744bdba9716c","Type":"ContainerStarted","Data":"d3b026a74a2b7918e6e2879e2e3f656198028459d1ed0cab385e8a0df166070f"} Oct 11 10:58:23.867667 master-2 kubenswrapper[4776]: I1011 10:58:23.867561 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.867533287 podStartE2EDuration="1.867533287s" podCreationTimestamp="2025-10-11 10:58:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:58:23.862232953 +0000 UTC m=+1938.646659682" watchObservedRunningTime="2025-10-11 10:58:23.867533287 +0000 UTC m=+1938.651960006" Oct 11 10:58:24.067713 master-2 kubenswrapper[4776]: I1011 10:58:24.067560 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ec11821-cdf5-45e1-a138-2b62dad57cc3" path="/var/lib/kubelet/pods/2ec11821-cdf5-45e1-a138-2b62dad57cc3/volumes" Oct 11 10:58:27.519694 master-2 kubenswrapper[4776]: I1011 10:58:27.519625 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 11 10:58:27.520344 master-2 kubenswrapper[4776]: I1011 10:58:27.519956 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 11 10:58:32.519936 master-2 kubenswrapper[4776]: I1011 10:58:32.519751 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 11 10:58:32.519936 master-2 kubenswrapper[4776]: I1011 10:58:32.519919 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 11 10:58:33.540999 master-2 kubenswrapper[4776]: I1011 10:58:33.540928 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="62606489-fbdc-4c5f-8cad-744bdba9716c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.0.182:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:58:33.541783 master-2 kubenswrapper[4776]: I1011 10:58:33.540931 4776 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="62606489-fbdc-4c5f-8cad-744bdba9716c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.0.182:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 10:58:42.527405 master-2 kubenswrapper[4776]: I1011 
10:58:42.527175 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 11 10:58:42.528100 master-2 kubenswrapper[4776]: I1011 10:58:42.527987 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 11 10:58:42.538663 master-2 kubenswrapper[4776]: I1011 10:58:42.538609 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 11 10:58:43.006081 master-2 kubenswrapper[4776]: I1011 10:58:43.006015 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 11 11:00:17.067613 master-2 kubenswrapper[4776]: I1011 11:00:17.067555 4776 scope.go:117] "RemoveContainer" containerID="4c31e081cf778b32f0c186718d90bb335248738e9e0b592d92f2f4e35cfd127e" Oct 11 11:00:17.091689 master-2 kubenswrapper[4776]: I1011 11:00:17.091589 4776 scope.go:117] "RemoveContainer" containerID="33f6c7d3d49df3aa00341fc88f0a2426a5e9fdd61368c7621163e1d8e6740b52" Oct 11 11:01:17.180213 master-2 kubenswrapper[4776]: I1011 11:01:17.180109 4776 scope.go:117] "RemoveContainer" containerID="1ef6065ef3373ebd1d031e48bef6566526ba170bc1411a15757ea04bbf481260" Oct 11 11:03:03.620618 master-2 kubenswrapper[4776]: I1011 11:03:03.620550 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-22xqd"] Oct 11 11:03:03.622167 master-2 kubenswrapper[4776]: I1011 11:03:03.622142 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:03.624527 master-2 kubenswrapper[4776]: I1011 11:03:03.624476 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Oct 11 11:03:03.624527 master-2 kubenswrapper[4776]: I1011 11:03:03.624493 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Oct 11 11:03:03.624815 master-2 kubenswrapper[4776]: I1011 11:03:03.624788 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Oct 11 11:03:03.645813 master-2 kubenswrapper[4776]: I1011 11:03:03.645745 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-22xqd"] Oct 11 11:03:03.719954 master-2 kubenswrapper[4776]: I1011 11:03:03.719864 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-config-data-merged\") pod \"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:03.720176 master-2 kubenswrapper[4776]: I1011 11:03:03.720053 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-scripts\") pod \"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:03.720386 master-2 kubenswrapper[4776]: I1011 11:03:03.720349 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-hm-ports\") pod \"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:03.720432 master-2 kubenswrapper[4776]: 
I1011 11:03:03.720393 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-config-data\") pod \"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:03.822013 master-2 kubenswrapper[4776]: I1011 11:03:03.821777 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-hm-ports\") pod \"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:03.822013 master-2 kubenswrapper[4776]: I1011 11:03:03.822003 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-config-data\") pod \"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:03.822280 master-2 kubenswrapper[4776]: I1011 11:03:03.822078 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-config-data-merged\") pod \"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:03.822280 master-2 kubenswrapper[4776]: I1011 11:03:03.822126 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-scripts\") pod \"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:03.822588 master-2 kubenswrapper[4776]: I1011 11:03:03.822552 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-config-data-merged\") pod \"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:03.823305 master-2 kubenswrapper[4776]: I1011 11:03:03.823277 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-hm-ports\") pod \"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:03.825491 master-2 kubenswrapper[4776]: I1011 11:03:03.825459 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-scripts\") pod \"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:03.826247 master-2 kubenswrapper[4776]: I1011 11:03:03.826189 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5127d226-d4e2-41f4-8a2b-eaf2f7707f12-config-data\") pod \"octavia-rsyslog-22xqd\" (UID: \"5127d226-d4e2-41f4-8a2b-eaf2f7707f12\") " pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:03.939062 master-2 kubenswrapper[4776]: I1011 11:03:03.939005 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:05.665789 master-2 kubenswrapper[4776]: I1011 11:03:05.665735 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-22xqd"] Oct 11 11:03:05.682956 master-2 kubenswrapper[4776]: W1011 11:03:05.682874 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5127d226_d4e2_41f4_8a2b_eaf2f7707f12.slice/crio-086d8b87f3256b526bae1a95dfbf12f82e90f39f919d3f33a0e3e0656998885b WatchSource:0}: Error finding container 086d8b87f3256b526bae1a95dfbf12f82e90f39f919d3f33a0e3e0656998885b: Status 404 returned error can't find the container with id 086d8b87f3256b526bae1a95dfbf12f82e90f39f919d3f33a0e3e0656998885b Oct 11 11:03:05.692738 master-2 kubenswrapper[4776]: I1011 11:03:05.689203 4776 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 11:03:06.567779 master-2 kubenswrapper[4776]: I1011 11:03:06.567609 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-22xqd" event={"ID":"5127d226-d4e2-41f4-8a2b-eaf2f7707f12","Type":"ContainerStarted","Data":"086d8b87f3256b526bae1a95dfbf12f82e90f39f919d3f33a0e3e0656998885b"} Oct 11 11:03:12.621289 master-2 kubenswrapper[4776]: I1011 11:03:12.621134 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-22xqd" event={"ID":"5127d226-d4e2-41f4-8a2b-eaf2f7707f12","Type":"ContainerStarted","Data":"659e8f92c23e371eff8940610003062936d4fd419ce58aaceb8d267acae7ec23"} Oct 11 11:03:14.638764 master-2 kubenswrapper[4776]: I1011 11:03:14.638525 4776 generic.go:334] "Generic (PLEG): container finished" podID="5127d226-d4e2-41f4-8a2b-eaf2f7707f12" containerID="659e8f92c23e371eff8940610003062936d4fd419ce58aaceb8d267acae7ec23" exitCode=0 Oct 11 11:03:14.638764 master-2 kubenswrapper[4776]: I1011 11:03:14.638575 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-22xqd" event={"ID":"5127d226-d4e2-41f4-8a2b-eaf2f7707f12","Type":"ContainerDied","Data":"659e8f92c23e371eff8940610003062936d4fd419ce58aaceb8d267acae7ec23"} Oct 11 11:03:16.661573 master-2 kubenswrapper[4776]: I1011 11:03:16.661507 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-22xqd" event={"ID":"5127d226-d4e2-41f4-8a2b-eaf2f7707f12","Type":"ContainerStarted","Data":"61363a536dd6f5a57138ce9ec0fda0b52a0e897985be03d22f6159df79753737"} Oct 11 11:03:16.662514 master-2 kubenswrapper[4776]: I1011 11:03:16.661731 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:16.694649 master-2 kubenswrapper[4776]: I1011 11:03:16.694544 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-22xqd" podStartSLOduration=3.816206545 podStartE2EDuration="13.694525937s" podCreationTimestamp="2025-10-11 11:03:03 +0000 UTC" firstStartedPulling="2025-10-11 11:03:05.689124768 +0000 UTC m=+2220.473551477" lastFinishedPulling="2025-10-11 11:03:15.56744416 +0000 UTC m=+2230.351870869" observedRunningTime="2025-10-11 11:03:16.691358602 +0000 UTC m=+2231.475785311" watchObservedRunningTime="2025-10-11 11:03:16.694525937 +0000 UTC m=+2231.478952646" Oct 11 11:03:33.967796 master-2 kubenswrapper[4776]: I1011 11:03:33.967638 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-22xqd" Oct 11 11:03:43.055371 master-2 
kubenswrapper[4776]: I1011 11:03:43.054752 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-v9tlh"] Oct 11 11:03:43.063277 master-2 kubenswrapper[4776]: I1011 11:03:43.063216 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-v9tlh"] Oct 11 11:03:44.041915 master-2 kubenswrapper[4776]: I1011 11:03:44.041854 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-d7297"] Oct 11 11:03:44.049743 master-2 kubenswrapper[4776]: I1011 11:03:44.049697 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-d7297"] Oct 11 11:03:44.071653 master-2 kubenswrapper[4776]: I1011 11:03:44.070836 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a7cb456-8a0b-4e56-9dc5-93b488813f77" path="/var/lib/kubelet/pods/1a7cb456-8a0b-4e56-9dc5-93b488813f77/volumes" Oct 11 11:03:44.071653 master-2 kubenswrapper[4776]: I1011 11:03:44.071531 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="829885e7-9e39-447e-a4f0-2ac128443d04" path="/var/lib/kubelet/pods/829885e7-9e39-447e-a4f0-2ac128443d04/volumes" Oct 11 11:04:08.788586 master-2 kubenswrapper[4776]: I1011 11:04:08.785145 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-4h92c"] Oct 11 11:04:08.789217 master-2 kubenswrapper[4776]: I1011 11:04:08.788971 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.793284 master-2 kubenswrapper[4776]: I1011 11:04:08.793246 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Oct 11 11:04:08.794460 master-2 kubenswrapper[4776]: I1011 11:04:08.794417 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Oct 11 11:04:08.796481 master-2 kubenswrapper[4776]: I1011 11:04:08.796397 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Oct 11 11:04:08.812549 master-2 kubenswrapper[4776]: I1011 11:04:08.812488 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-4h92c"] Oct 11 11:04:08.852915 master-2 kubenswrapper[4776]: I1011 11:04:08.852849 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-amphora-certs\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.853288 master-2 kubenswrapper[4776]: I1011 11:04:08.852953 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a4c01261-6836-4eb3-9dca-826e486273ec-config-data-merged\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.853288 master-2 kubenswrapper[4776]: I1011 11:04:08.853000 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-combined-ca-bundle\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 
11:04:08.853288 master-2 kubenswrapper[4776]: I1011 11:04:08.853062 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/a4c01261-6836-4eb3-9dca-826e486273ec-hm-ports\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.853288 master-2 kubenswrapper[4776]: I1011 11:04:08.853087 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-config-data\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.853288 master-2 kubenswrapper[4776]: I1011 11:04:08.853153 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-scripts\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.955149 master-2 kubenswrapper[4776]: I1011 11:04:08.955081 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-config-data\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.955373 master-2 kubenswrapper[4776]: I1011 11:04:08.955199 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-scripts\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.955489 master-2 kubenswrapper[4776]: I1011 11:04:08.955454 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-amphora-certs\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.955542 master-2 kubenswrapper[4776]: I1011 11:04:08.955525 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a4c01261-6836-4eb3-9dca-826e486273ec-config-data-merged\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.955592 master-2 kubenswrapper[4776]: I1011 11:04:08.955572 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-combined-ca-bundle\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.955662 master-2 kubenswrapper[4776]: I1011 11:04:08.955642 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/a4c01261-6836-4eb3-9dca-826e486273ec-hm-ports\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " 
pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.956331 master-2 kubenswrapper[4776]: I1011 11:04:08.956277 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a4c01261-6836-4eb3-9dca-826e486273ec-config-data-merged\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.958611 master-2 kubenswrapper[4776]: I1011 11:04:08.957503 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/a4c01261-6836-4eb3-9dca-826e486273ec-hm-ports\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.960472 master-2 kubenswrapper[4776]: I1011 11:04:08.960348 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-scripts\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.960622 master-2 kubenswrapper[4776]: I1011 11:04:08.960584 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-amphora-certs\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.961727 master-2 kubenswrapper[4776]: I1011 11:04:08.961264 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-config-data\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:08.961727 master-2 kubenswrapper[4776]: I1011 11:04:08.961630 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4c01261-6836-4eb3-9dca-826e486273ec-combined-ca-bundle\") pod \"octavia-healthmanager-4h92c\" (UID: \"a4c01261-6836-4eb3-9dca-826e486273ec\") " pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:09.104713 master-2 kubenswrapper[4776]: I1011 11:04:09.104557 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:09.693123 master-2 kubenswrapper[4776]: I1011 11:04:09.693052 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-4h92c"] Oct 11 11:04:10.173815 master-2 kubenswrapper[4776]: I1011 11:04:10.173737 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-4h92c" event={"ID":"a4c01261-6836-4eb3-9dca-826e486273ec","Type":"ContainerStarted","Data":"e0da58750383ace39a89ddfe7e45a633fc74849fe1c0d62dadbe6c821a5735d7"} Oct 11 11:04:11.022109 master-2 kubenswrapper[4776]: I1011 11:04:11.022036 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-dsjdj"] Oct 11 11:04:11.023887 master-2 kubenswrapper[4776]: I1011 11:04:11.023853 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.026788 master-2 kubenswrapper[4776]: I1011 11:04:11.026733 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Oct 11 11:04:11.026919 master-2 kubenswrapper[4776]: I1011 11:04:11.026856 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Oct 11 11:04:11.051756 master-2 kubenswrapper[4776]: I1011 11:04:11.051640 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-dsjdj"] Oct 11 11:04:11.098878 master-2 kubenswrapper[4776]: I1011 11:04:11.098826 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/b714c56b-9901-470b-ba8d-790c638ddd43-hm-ports\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.099110 master-2 kubenswrapper[4776]: I1011 11:04:11.098959 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-config-data\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.099110 master-2 kubenswrapper[4776]: I1011 11:04:11.099029 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b714c56b-9901-470b-ba8d-790c638ddd43-config-data-merged\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.099226 master-2 kubenswrapper[4776]: I1011 11:04:11.099118 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-combined-ca-bundle\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.099226 master-2 kubenswrapper[4776]: I1011 11:04:11.099155 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-scripts\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.099226 master-2 kubenswrapper[4776]: I1011 11:04:11.099195 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-amphora-certs\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.188860 master-2 kubenswrapper[4776]: I1011 11:04:11.188815 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-4h92c" event={"ID":"a4c01261-6836-4eb3-9dca-826e486273ec","Type":"ContainerStarted","Data":"6d7fc05fe475863b3968a1652d1f8e0da4ae7bb2a768ecaa2bf9f2f3fa395701"} Oct 11 11:04:11.201690 master-2 kubenswrapper[4776]: I1011 11:04:11.201592 4776 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-amphora-certs\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.201926 master-2 kubenswrapper[4776]: I1011 11:04:11.201754 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/b714c56b-9901-470b-ba8d-790c638ddd43-hm-ports\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.201926 master-2 kubenswrapper[4776]: I1011 11:04:11.201869 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-config-data\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.202023 master-2 kubenswrapper[4776]: I1011 11:04:11.201958 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b714c56b-9901-470b-ba8d-790c638ddd43-config-data-merged\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.202060 master-2 kubenswrapper[4776]: I1011 11:04:11.202042 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-combined-ca-bundle\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.202145 master-2 kubenswrapper[4776]: I1011 11:04:11.202072 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-scripts\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.202880 master-2 kubenswrapper[4776]: I1011 11:04:11.202831 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b714c56b-9901-470b-ba8d-790c638ddd43-config-data-merged\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.204616 master-2 kubenswrapper[4776]: I1011 11:04:11.204547 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/b714c56b-9901-470b-ba8d-790c638ddd43-hm-ports\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.207411 master-2 kubenswrapper[4776]: I1011 11:04:11.206885 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-combined-ca-bundle\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.207411 master-2 kubenswrapper[4776]: I1011 11:04:11.207368 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-config-data\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.207520 master-2 kubenswrapper[4776]: I1011 11:04:11.207345 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-scripts\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.236833 master-2 kubenswrapper[4776]: I1011 11:04:11.236715 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/b714c56b-9901-470b-ba8d-790c638ddd43-amphora-certs\") pod \"octavia-housekeeping-dsjdj\" (UID: \"b714c56b-9901-470b-ba8d-790c638ddd43\") " pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.360787 master-2 kubenswrapper[4776]: I1011 11:04:11.340525 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:11.928195 master-2 kubenswrapper[4776]: I1011 11:04:11.928149 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-dsjdj"] Oct 11 11:04:11.928562 master-2 kubenswrapper[4776]: W1011 11:04:11.928511 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb714c56b_9901_470b_ba8d_790c638ddd43.slice/crio-2dd77c240f26748f3aa8f9054cf07fa045c2d4f5a0c2a851b6cd723b0f2608b7 WatchSource:0}: Error finding container 2dd77c240f26748f3aa8f9054cf07fa045c2d4f5a0c2a851b6cd723b0f2608b7: Status 404 returned error can't find the container with id 2dd77c240f26748f3aa8f9054cf07fa045c2d4f5a0c2a851b6cd723b0f2608b7 Oct 11 11:04:12.216689 master-2 kubenswrapper[4776]: I1011 11:04:12.216554 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-dsjdj" event={"ID":"b714c56b-9901-470b-ba8d-790c638ddd43","Type":"ContainerStarted","Data":"2dd77c240f26748f3aa8f9054cf07fa045c2d4f5a0c2a851b6cd723b0f2608b7"} Oct 11 11:04:12.219586 master-2 kubenswrapper[4776]: I1011 11:04:12.219530 4776 generic.go:334] "Generic (PLEG): container finished" podID="a4c01261-6836-4eb3-9dca-826e486273ec" containerID="6d7fc05fe475863b3968a1652d1f8e0da4ae7bb2a768ecaa2bf9f2f3fa395701" exitCode=0 Oct 11 11:04:12.219661 master-2 kubenswrapper[4776]: I1011 11:04:12.219582 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-4h92c" event={"ID":"a4c01261-6836-4eb3-9dca-826e486273ec","Type":"ContainerDied","Data":"6d7fc05fe475863b3968a1652d1f8e0da4ae7bb2a768ecaa2bf9f2f3fa395701"} Oct 11 11:04:12.830970 master-2 kubenswrapper[4776]: I1011 11:04:12.830894 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-2h6nw"] Oct 11 11:04:12.832433 master-2 kubenswrapper[4776]: I1011 11:04:12.832404 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:12.835381 master-2 kubenswrapper[4776]: I1011 11:04:12.835322 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Oct 11 11:04:12.836044 master-2 kubenswrapper[4776]: I1011 11:04:12.835983 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Oct 11 11:04:12.854890 master-2 kubenswrapper[4776]: I1011 11:04:12.854820 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-2h6nw"] Oct 11 11:04:12.941130 master-2 kubenswrapper[4776]: I1011 11:04:12.941023 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-scripts\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:12.942066 master-2 kubenswrapper[4776]: I1011 11:04:12.942023 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/165afc87-c932-440a-931b-99e339c2b038-config-data-merged\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:12.942149 master-2 kubenswrapper[4776]: I1011 11:04:12.942070 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-amphora-certs\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:12.942197 master-2 kubenswrapper[4776]: I1011 11:04:12.942172 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-combined-ca-bundle\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:12.942275 master-2 kubenswrapper[4776]: I1011 11:04:12.942227 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/165afc87-c932-440a-931b-99e339c2b038-hm-ports\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:12.942331 master-2 kubenswrapper[4776]: I1011 11:04:12.942302 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-config-data\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.044165 master-2 kubenswrapper[4776]: I1011 11:04:13.044002 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/165afc87-c932-440a-931b-99e339c2b038-config-data-merged\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.044165 master-2 kubenswrapper[4776]: I1011 11:04:13.044066 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" 
(UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-amphora-certs\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.044165 master-2 kubenswrapper[4776]: I1011 11:04:13.044124 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-combined-ca-bundle\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.044165 master-2 kubenswrapper[4776]: I1011 11:04:13.044157 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/165afc87-c932-440a-931b-99e339c2b038-hm-ports\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.044659 master-2 kubenswrapper[4776]: I1011 11:04:13.044195 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-config-data\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.044659 master-2 kubenswrapper[4776]: I1011 11:04:13.044233 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-scripts\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.044659 master-2 kubenswrapper[4776]: I1011 11:04:13.044523 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/165afc87-c932-440a-931b-99e339c2b038-config-data-merged\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.045795 master-2 kubenswrapper[4776]: I1011 11:04:13.045741 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/165afc87-c932-440a-931b-99e339c2b038-hm-ports\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.047974 master-2 kubenswrapper[4776]: I1011 11:04:13.047916 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-amphora-certs\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.049574 master-2 kubenswrapper[4776]: I1011 11:04:13.049513 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-combined-ca-bundle\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.050014 master-2 kubenswrapper[4776]: I1011 11:04:13.049973 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-scripts\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " 
pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.050771 master-2 kubenswrapper[4776]: I1011 11:04:13.050699 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/165afc87-c932-440a-931b-99e339c2b038-config-data\") pod \"octavia-worker-2h6nw\" (UID: \"165afc87-c932-440a-931b-99e339c2b038\") " pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.175959 master-2 kubenswrapper[4776]: I1011 11:04:13.175825 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:13.240227 master-2 kubenswrapper[4776]: I1011 11:04:13.240126 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-4h92c" event={"ID":"a4c01261-6836-4eb3-9dca-826e486273ec","Type":"ContainerStarted","Data":"d48802fad657594e7d8c88578a13ecd2e12055388ddf6c73c39497b6aaad00d7"} Oct 11 11:04:13.241178 master-2 kubenswrapper[4776]: I1011 11:04:13.240437 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:13.279494 master-2 kubenswrapper[4776]: I1011 11:04:13.279429 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-4h92c" podStartSLOduration=5.279410701 podStartE2EDuration="5.279410701s" podCreationTimestamp="2025-10-11 11:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 11:04:13.274331374 +0000 UTC m=+2288.058758083" watchObservedRunningTime="2025-10-11 11:04:13.279410701 +0000 UTC m=+2288.063837400" Oct 11 11:04:13.734762 master-2 kubenswrapper[4776]: I1011 11:04:13.734319 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-2h6nw"] Oct 11 11:04:13.748588 master-2 kubenswrapper[4776]: W1011 11:04:13.748537 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod165afc87_c932_440a_931b_99e339c2b038.slice/crio-8d52820228f382bf6db9e08a5ed87739272bf5d5e1d0c7c40d2541d2d4a45420 WatchSource:0}: Error finding container 8d52820228f382bf6db9e08a5ed87739272bf5d5e1d0c7c40d2541d2d4a45420: Status 404 returned error can't find the container with id 8d52820228f382bf6db9e08a5ed87739272bf5d5e1d0c7c40d2541d2d4a45420 Oct 11 11:04:14.265198 master-2 kubenswrapper[4776]: I1011 11:04:14.264352 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-2h6nw" event={"ID":"165afc87-c932-440a-931b-99e339c2b038","Type":"ContainerStarted","Data":"8d52820228f382bf6db9e08a5ed87739272bf5d5e1d0c7c40d2541d2d4a45420"} Oct 11 11:04:15.273059 master-2 kubenswrapper[4776]: I1011 11:04:15.272989 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-dsjdj" event={"ID":"b714c56b-9901-470b-ba8d-790c638ddd43","Type":"ContainerStarted","Data":"bd4ee5e48134590a1dd0bfaf0e53ca64e08a46974ba6ae0b270bfd71b5de3f13"} Oct 11 11:04:16.282952 master-2 kubenswrapper[4776]: I1011 11:04:16.282917 4776 generic.go:334] "Generic (PLEG): container finished" podID="b714c56b-9901-470b-ba8d-790c638ddd43" containerID="bd4ee5e48134590a1dd0bfaf0e53ca64e08a46974ba6ae0b270bfd71b5de3f13" exitCode=0 Oct 11 11:04:16.283877 master-2 kubenswrapper[4776]: I1011 11:04:16.282962 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-dsjdj" 
event={"ID":"b714c56b-9901-470b-ba8d-790c638ddd43","Type":"ContainerDied","Data":"bd4ee5e48134590a1dd0bfaf0e53ca64e08a46974ba6ae0b270bfd71b5de3f13"} Oct 11 11:04:17.045485 master-2 kubenswrapper[4776]: I1011 11:04:17.045376 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-848z9"] Oct 11 11:04:17.056153 master-2 kubenswrapper[4776]: I1011 11:04:17.056102 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-848z9"] Oct 11 11:04:17.318856 master-2 kubenswrapper[4776]: I1011 11:04:17.318287 4776 scope.go:117] "RemoveContainer" containerID="c49bb5e7b7b29373f91460103b137215ef53ea437154f26d2c8b1bb8b44e90c5" Oct 11 11:04:17.322749 master-2 kubenswrapper[4776]: I1011 11:04:17.322660 4776 generic.go:334] "Generic (PLEG): container finished" podID="165afc87-c932-440a-931b-99e339c2b038" containerID="bc0eb0d1dc5809dc09f96697cede8293b058768a79846386115117b790c89478" exitCode=0 Oct 11 11:04:17.322869 master-2 kubenswrapper[4776]: I1011 11:04:17.322781 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-2h6nw" event={"ID":"165afc87-c932-440a-931b-99e339c2b038","Type":"ContainerDied","Data":"bc0eb0d1dc5809dc09f96697cede8293b058768a79846386115117b790c89478"} Oct 11 11:04:17.326409 master-2 kubenswrapper[4776]: I1011 11:04:17.326272 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-dsjdj" event={"ID":"b714c56b-9901-470b-ba8d-790c638ddd43","Type":"ContainerStarted","Data":"bb9a428e96576310c4fd673f948b0b1f89e5c1d96bde39469e92168f764a0b5a"} Oct 11 11:04:17.326476 master-2 kubenswrapper[4776]: I1011 11:04:17.326434 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:17.342703 master-2 kubenswrapper[4776]: I1011 11:04:17.342658 4776 scope.go:117] "RemoveContainer" containerID="fef38235c13a94e0f559cc147b180218c57bb86edee562a4d0f5a4ef26d4a256" Oct 11 11:04:17.381073 master-2 kubenswrapper[4776]: I1011 11:04:17.381009 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-dsjdj" podStartSLOduration=4.775390734 podStartE2EDuration="7.380990891s" podCreationTimestamp="2025-10-11 11:04:10 +0000 UTC" firstStartedPulling="2025-10-11 11:04:11.931054145 +0000 UTC m=+2286.715480854" lastFinishedPulling="2025-10-11 11:04:14.536654302 +0000 UTC m=+2289.321081011" observedRunningTime="2025-10-11 11:04:17.379416969 +0000 UTC m=+2292.163843688" watchObservedRunningTime="2025-10-11 11:04:17.380990891 +0000 UTC m=+2292.165417600" Oct 11 11:04:17.451452 master-2 kubenswrapper[4776]: I1011 11:04:17.451410 4776 scope.go:117] "RemoveContainer" containerID="6868226996ff13456bac113fdafadf8a4b2b3ab56a80c2fc96fb7d2ab46abffa" Oct 11 11:04:17.473351 master-2 kubenswrapper[4776]: I1011 11:04:17.473295 4776 scope.go:117] "RemoveContainer" containerID="8fad6ef13ce32e48a0200e0d51e6ec1869e31fdbae9e43f6b4327a3848b3263c" Oct 11 11:04:18.070665 master-2 kubenswrapper[4776]: I1011 11:04:18.070613 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25099d7a-e434-48d2-a175-088e5ad2caf2" path="/var/lib/kubelet/pods/25099d7a-e434-48d2-a175-088e5ad2caf2/volumes" Oct 11 11:04:18.350901 master-2 kubenswrapper[4776]: I1011 11:04:18.343649 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-2h6nw" 
event={"ID":"165afc87-c932-440a-931b-99e339c2b038","Type":"ContainerStarted","Data":"d973db2c223668b8c8f8737e8e4a20b4e7eeeb9df8c1114502eb718a02027190"} Oct 11 11:04:18.350901 master-2 kubenswrapper[4776]: I1011 11:04:18.343745 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:18.391955 master-2 kubenswrapper[4776]: I1011 11:04:18.391843 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-2h6nw" podStartSLOduration=4.20683001 podStartE2EDuration="6.391824872s" podCreationTimestamp="2025-10-11 11:04:12 +0000 UTC" firstStartedPulling="2025-10-11 11:04:13.750695608 +0000 UTC m=+2288.535122317" lastFinishedPulling="2025-10-11 11:04:15.93569047 +0000 UTC m=+2290.720117179" observedRunningTime="2025-10-11 11:04:18.385333727 +0000 UTC m=+2293.169760436" watchObservedRunningTime="2025-10-11 11:04:18.391824872 +0000 UTC m=+2293.176251581" Oct 11 11:04:21.041324 master-2 kubenswrapper[4776]: I1011 11:04:21.041271 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-8dnfj"] Oct 11 11:04:21.048032 master-2 kubenswrapper[4776]: I1011 11:04:21.047977 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-8dnfj"] Oct 11 11:04:22.071342 master-2 kubenswrapper[4776]: I1011 11:04:22.071298 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e873bed5-1a50-4fb0-81b1-2225f4893b28" path="/var/lib/kubelet/pods/e873bed5-1a50-4fb0-81b1-2225f4893b28/volumes" Oct 11 11:04:24.153207 master-2 kubenswrapper[4776]: I1011 11:04:24.153151 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-4h92c" Oct 11 11:04:26.369393 master-2 kubenswrapper[4776]: I1011 11:04:26.369246 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-dsjdj" Oct 11 11:04:28.207312 master-2 kubenswrapper[4776]: I1011 11:04:28.207260 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-2h6nw" Oct 11 11:04:33.064386 master-2 kubenswrapper[4776]: I1011 11:04:33.064319 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-5pz76"] Oct 11 11:04:33.073979 master-2 kubenswrapper[4776]: I1011 11:04:33.073920 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-5pz76"] Oct 11 11:04:34.069666 master-2 kubenswrapper[4776]: I1011 11:04:34.069608 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cef6f34-fa11-4593-b4d8-c9ac415f1967" path="/var/lib/kubelet/pods/7cef6f34-fa11-4593-b4d8-c9ac415f1967/volumes" Oct 11 11:04:36.042564 master-2 kubenswrapper[4776]: I1011 11:04:36.042504 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xptqx"] Oct 11 11:04:36.071797 master-2 kubenswrapper[4776]: I1011 11:04:36.071727 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xptqx"] Oct 11 11:04:38.069182 master-2 kubenswrapper[4776]: I1011 11:04:38.069051 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30996a86-1b86-4a67-bfea-0e63f7417196" path="/var/lib/kubelet/pods/30996a86-1b86-4a67-bfea-0e63f7417196/volumes" Oct 11 11:04:47.307593 master-2 kubenswrapper[4776]: I1011 11:04:47.307522 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85b986dbf-plkrt"] Oct 11 11:04:47.309141 master-2 
kubenswrapper[4776]: I1011 11:04:47.309093 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:47.312364 master-2 kubenswrapper[4776]: I1011 11:04:47.312305 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 11 11:04:47.312473 master-2 kubenswrapper[4776]: I1011 11:04:47.312305 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 11 11:04:47.312933 master-2 kubenswrapper[4776]: I1011 11:04:47.312898 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 11 11:04:47.313066 master-2 kubenswrapper[4776]: I1011 11:04:47.312943 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 11 11:04:47.313066 master-2 kubenswrapper[4776]: I1011 11:04:47.313037 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"networkers" Oct 11 11:04:47.313549 master-2 kubenswrapper[4776]: I1011 11:04:47.313423 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Oct 11 11:04:47.326844 master-2 kubenswrapper[4776]: I1011 11:04:47.323359 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85b986dbf-plkrt"] Oct 11 11:04:47.393043 master-2 kubenswrapper[4776]: I1011 11:04:47.392971 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-svc\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:47.393043 master-2 kubenswrapper[4776]: I1011 11:04:47.393025 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-networkers\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:47.393043 master-2 kubenswrapper[4776]: I1011 11:04:47.393047 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-config\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:47.393607 master-2 kubenswrapper[4776]: I1011 11:04:47.393080 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-swift-storage-0\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:47.393607 master-2 kubenswrapper[4776]: I1011 11:04:47.393119 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-nb\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:47.393607 master-2 kubenswrapper[4776]: I1011 11:04:47.393150 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-sb\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:47.393607 master-2 kubenswrapper[4776]: I1011 11:04:47.393388 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs8hv\" (UniqueName: \"kubernetes.io/projected/a63f7af9-5ea2-4091-901f-6d9187377785-kube-api-access-qs8hv\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:47.495455 master-2 kubenswrapper[4776]: I1011 11:04:47.495383 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-svc\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:47.495775 master-2 kubenswrapper[4776]: I1011 11:04:47.495465 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-networkers\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:47.495775 master-2 kubenswrapper[4776]: I1011 11:04:47.495497 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-config\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:47.495775 master-2 kubenswrapper[4776]: I1011 11:04:47.495558 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-swift-storage-0\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:47.495775 master-2 kubenswrapper[4776]: I1011 11:04:47.495614 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-nb\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:47.495775 master-2 kubenswrapper[4776]: I1011 11:04:47.495650 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-sb\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:47.495775 master-2 kubenswrapper[4776]: I1011 11:04:47.495729 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs8hv\" (UniqueName: \"kubernetes.io/projected/a63f7af9-5ea2-4091-901f-6d9187377785-kube-api-access-qs8hv\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:47.497663 master-2 
kubenswrapper[4776]: I1011 11:04:47.497609 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-swift-storage-0\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:47.498620 master-2 kubenswrapper[4776]: I1011 11:04:47.498416 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-sb\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:47.498984 master-2 kubenswrapper[4776]: I1011 11:04:47.498942 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-nb\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:47.499311 master-2 kubenswrapper[4776]: I1011 11:04:47.499256 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-svc\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:47.500333 master-2 kubenswrapper[4776]: I1011 11:04:47.500270 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-config\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:47.500490 master-2 kubenswrapper[4776]: I1011 11:04:47.500274 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-networkers\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:47.521353 master-2 kubenswrapper[4776]: I1011 11:04:47.521214 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs8hv\" (UniqueName: \"kubernetes.io/projected/a63f7af9-5ea2-4091-901f-6d9187377785-kube-api-access-qs8hv\") pod \"dnsmasq-dns-85b986dbf-plkrt\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:47.627961 master-2 kubenswrapper[4776]: I1011 11:04:47.627770 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:48.125638 master-2 kubenswrapper[4776]: I1011 11:04:48.125576 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85b986dbf-plkrt"] Oct 11 11:04:48.614218 master-2 kubenswrapper[4776]: I1011 11:04:48.614019 4776 generic.go:334] "Generic (PLEG): container finished" podID="a63f7af9-5ea2-4091-901f-6d9187377785" containerID="eb352efe378a73f599c441374fa88609c6cdd75b37e89d6d6115d30787123e22" exitCode=0 Oct 11 11:04:48.614218 master-2 kubenswrapper[4776]: I1011 11:04:48.614081 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85b986dbf-plkrt" event={"ID":"a63f7af9-5ea2-4091-901f-6d9187377785","Type":"ContainerDied","Data":"eb352efe378a73f599c441374fa88609c6cdd75b37e89d6d6115d30787123e22"} Oct 11 11:04:48.614218 master-2 kubenswrapper[4776]: I1011 11:04:48.614105 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85b986dbf-plkrt" event={"ID":"a63f7af9-5ea2-4091-901f-6d9187377785","Type":"ContainerStarted","Data":"7e86f6a0692217b02770a72e3b5288c4aba15fbb51a8be198df074c909a4ebea"} Oct 11 11:04:49.623833 master-2 kubenswrapper[4776]: I1011 11:04:49.623698 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85b986dbf-plkrt" event={"ID":"a63f7af9-5ea2-4091-901f-6d9187377785","Type":"ContainerStarted","Data":"207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64"} Oct 11 11:04:49.624462 master-2 kubenswrapper[4776]: I1011 11:04:49.623869 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:49.658980 master-2 kubenswrapper[4776]: I1011 11:04:49.658875 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85b986dbf-plkrt" podStartSLOduration=2.658850683 podStartE2EDuration="2.658850683s" podCreationTimestamp="2025-10-11 11:04:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 11:04:49.652742628 +0000 UTC m=+2324.437169337" watchObservedRunningTime="2025-10-11 11:04:49.658850683 +0000 UTC m=+2324.443277392" Oct 11 11:04:57.629937 master-2 kubenswrapper[4776]: I1011 11:04:57.629856 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:04:58.034278 master-2 kubenswrapper[4776]: I1011 11:04:58.034202 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59dd57778c-sbrb9"] Oct 11 11:04:58.036023 master-2 kubenswrapper[4776]: I1011 11:04:58.035991 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.052350 master-2 kubenswrapper[4776]: I1011 11:04:58.052299 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59dd57778c-sbrb9"] Oct 11 11:04:58.169812 master-2 kubenswrapper[4776]: I1011 11:04:58.169709 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt62j\" (UniqueName: \"kubernetes.io/projected/bdab7f7f-97fe-4393-859b-b71f68b588b4-kube-api-access-kt62j\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.169812 master-2 kubenswrapper[4776]: I1011 11:04:58.169809 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-networkers\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.170082 master-2 kubenswrapper[4776]: I1011 11:04:58.169872 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-config\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.170082 master-2 kubenswrapper[4776]: I1011 11:04:58.169894 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-sb\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.170082 master-2 kubenswrapper[4776]: I1011 11:04:58.170040 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-svc\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.170254 master-2 kubenswrapper[4776]: I1011 11:04:58.170217 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-swift-storage-0\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.170324 master-2 kubenswrapper[4776]: I1011 11:04:58.170298 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-nb\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.272099 master-2 kubenswrapper[4776]: I1011 11:04:58.272019 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt62j\" (UniqueName: \"kubernetes.io/projected/bdab7f7f-97fe-4393-859b-b71f68b588b4-kube-api-access-kt62j\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " 
pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.272099 master-2 kubenswrapper[4776]: I1011 11:04:58.272087 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-networkers\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.272363 master-2 kubenswrapper[4776]: I1011 11:04:58.272139 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-config\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.272363 master-2 kubenswrapper[4776]: I1011 11:04:58.272162 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-sb\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.272363 master-2 kubenswrapper[4776]: I1011 11:04:58.272193 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-svc\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.272363 master-2 kubenswrapper[4776]: I1011 11:04:58.272240 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-swift-storage-0\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.272363 master-2 kubenswrapper[4776]: I1011 11:04:58.272270 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-nb\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.273124 master-2 kubenswrapper[4776]: I1011 11:04:58.273076 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-networkers\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.273187 master-2 kubenswrapper[4776]: I1011 11:04:58.273170 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-nb\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.273426 master-2 kubenswrapper[4776]: I1011 11:04:58.273394 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-sb\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " 
pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.273806 master-2 kubenswrapper[4776]: I1011 11:04:58.273779 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-svc\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.274041 master-2 kubenswrapper[4776]: I1011 11:04:58.274012 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-swift-storage-0\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.274438 master-2 kubenswrapper[4776]: I1011 11:04:58.274411 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-config\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.293068 master-2 kubenswrapper[4776]: I1011 11:04:58.292970 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt62j\" (UniqueName: \"kubernetes.io/projected/bdab7f7f-97fe-4393-859b-b71f68b588b4-kube-api-access-kt62j\") pod \"dnsmasq-dns-59dd57778c-sbrb9\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.357462 master-2 kubenswrapper[4776]: I1011 11:04:58.357407 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:04:58.873203 master-2 kubenswrapper[4776]: I1011 11:04:58.873135 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59dd57778c-sbrb9"] Oct 11 11:04:59.727479 master-2 kubenswrapper[4776]: I1011 11:04:59.727270 4776 generic.go:334] "Generic (PLEG): container finished" podID="bdab7f7f-97fe-4393-859b-b71f68b588b4" containerID="2822f6bcfdf6e3e239b9eb0a0c752c195651ceefe7ca225b476e6aad99bf8b53" exitCode=0 Oct 11 11:04:59.727479 master-2 kubenswrapper[4776]: I1011 11:04:59.727383 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" event={"ID":"bdab7f7f-97fe-4393-859b-b71f68b588b4","Type":"ContainerDied","Data":"2822f6bcfdf6e3e239b9eb0a0c752c195651ceefe7ca225b476e6aad99bf8b53"} Oct 11 11:04:59.727479 master-2 kubenswrapper[4776]: I1011 11:04:59.727417 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" event={"ID":"bdab7f7f-97fe-4393-859b-b71f68b588b4","Type":"ContainerStarted","Data":"47eafd73d1b09f5fdc1106e16dc957897cec09f84dde7bc47e0e3b86cb99cab7"} Oct 11 11:05:00.744876 master-2 kubenswrapper[4776]: I1011 11:05:00.744659 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" event={"ID":"bdab7f7f-97fe-4393-859b-b71f68b588b4","Type":"ContainerStarted","Data":"a642a0d29a9271c87588091a04275c01a52f4245f7c7b0cb7fa10b6f9d71d7b7"} Oct 11 11:05:00.745826 master-2 kubenswrapper[4776]: I1011 11:05:00.745160 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:05:00.786767 master-2 kubenswrapper[4776]: I1011 11:05:00.786668 4776 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" podStartSLOduration=2.786648316 podStartE2EDuration="2.786648316s" podCreationTimestamp="2025-10-11 11:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 11:05:00.779946824 +0000 UTC m=+2335.564373533" watchObservedRunningTime="2025-10-11 11:05:00.786648316 +0000 UTC m=+2335.571075025" Oct 11 11:05:01.058395 master-2 kubenswrapper[4776]: I1011 11:05:01.058230 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-nz82h"] Oct 11 11:05:01.072839 master-2 kubenswrapper[4776]: I1011 11:05:01.072786 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-nz82h"] Oct 11 11:05:02.071147 master-2 kubenswrapper[4776]: I1011 11:05:02.071084 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="005f2579-b848-40fd-b3f3-2d3383344047" path="/var/lib/kubelet/pods/005f2579-b848-40fd-b3f3-2d3383344047/volumes" Oct 11 11:05:03.052964 master-2 kubenswrapper[4776]: I1011 11:05:03.052907 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-2dgxj"] Oct 11 11:05:03.060240 master-2 kubenswrapper[4776]: I1011 11:05:03.060175 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b5802-db-sync-4sh7r"] Oct 11 11:05:03.073291 master-2 kubenswrapper[4776]: I1011 11:05:03.073221 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b5802-db-sync-4sh7r"] Oct 11 11:05:03.078839 master-2 kubenswrapper[4776]: I1011 11:05:03.078367 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-2dgxj"] Oct 11 11:05:04.067472 master-2 kubenswrapper[4776]: I1011 11:05:04.067410 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f2a1bf-160f-40ad-bc2c-a7286a90b988" path="/var/lib/kubelet/pods/c4f2a1bf-160f-40ad-bc2c-a7286a90b988/volumes" Oct 11 11:05:04.068162 master-2 kubenswrapper[4776]: I1011 11:05:04.068129 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d90a5c6e-6cd2-4396-b38c-dc0e03da9d38" path="/var/lib/kubelet/pods/d90a5c6e-6cd2-4396-b38c-dc0e03da9d38/volumes" Oct 11 11:05:08.359657 master-2 kubenswrapper[4776]: I1011 11:05:08.359596 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:05:13.064008 master-2 kubenswrapper[4776]: I1011 11:05:13.063923 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-sync-n7nm2"] Oct 11 11:05:13.073332 master-2 kubenswrapper[4776]: I1011 11:05:13.073246 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-sync-n7nm2"] Oct 11 11:05:14.119935 master-2 kubenswrapper[4776]: I1011 11:05:14.119827 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a1c4d38-1f25-4465-9976-43be28a3b282" path="/var/lib/kubelet/pods/4a1c4d38-1f25-4465-9976-43be28a3b282/volumes" Oct 11 11:05:17.588173 master-2 kubenswrapper[4776]: I1011 11:05:17.588107 4776 scope.go:117] "RemoveContainer" containerID="076657b6f64fe541979b78922896c11acb7556c05c2c695c6e5189bd99136f77" Oct 11 11:05:17.659066 master-2 kubenswrapper[4776]: I1011 11:05:17.658899 4776 scope.go:117] "RemoveContainer" containerID="a51a44def94d3717c345488e50d0f116c0777018eb329e198859a513f723b71f" Oct 11 11:05:17.695992 master-2 kubenswrapper[4776]: I1011 11:05:17.695934 4776 scope.go:117] "RemoveContainer" 
containerID="5943503815e84fefb31e290c68a69b97ca3d79be2036bfcb024f274a60831171" Oct 11 11:05:17.754787 master-2 kubenswrapper[4776]: I1011 11:05:17.754725 4776 scope.go:117] "RemoveContainer" containerID="f7b2a1a1f6cfb4760b3cd06bae0a54d2a7eb0b82c31eb466cd4d45304c8c9826" Oct 11 11:05:17.842381 master-2 kubenswrapper[4776]: I1011 11:05:17.842087 4776 scope.go:117] "RemoveContainer" containerID="9cbf7752342665c7e92852ff9e1ea1e5f0d5dc7a3ede8988348adb50918c085e" Oct 11 11:05:17.875700 master-2 kubenswrapper[4776]: I1011 11:05:17.875641 4776 scope.go:117] "RemoveContainer" containerID="3eac08206e51e42747d734fbad286ecc138ff94d119ee5ffe85a0b9dac4348e7" Oct 11 11:05:17.913874 master-2 kubenswrapper[4776]: I1011 11:05:17.913828 4776 scope.go:117] "RemoveContainer" containerID="b7e0d5acf6bbdc53a5ba11187ce29782ecfb6106125d3631307f9dca40bcd06a" Oct 11 11:05:17.969264 master-2 kubenswrapper[4776]: I1011 11:05:17.969217 4776 scope.go:117] "RemoveContainer" containerID="7a37bc55a741b7925fe73a8333e051e4eed1c5b9263c43dfa0598438fa7d12fc" Oct 11 11:05:18.029064 master-2 kubenswrapper[4776]: I1011 11:05:18.029015 4776 scope.go:117] "RemoveContainer" containerID="5a93f178b03e320516bcd32c99e245334eeef09b36cb1fbff0b5d12e1d56145d" Oct 11 11:05:18.988716 master-2 kubenswrapper[4776]: I1011 11:05:18.987988 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85b986dbf-plkrt"] Oct 11 11:05:18.988716 master-2 kubenswrapper[4776]: I1011 11:05:18.988254 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85b986dbf-plkrt" podUID="a63f7af9-5ea2-4091-901f-6d9187377785" containerName="dnsmasq-dns" containerID="cri-o://207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64" gracePeriod=10 Oct 11 11:05:19.670701 master-2 kubenswrapper[4776]: I1011 11:05:19.668269 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:05:19.755571 master-2 kubenswrapper[4776]: I1011 11:05:19.755494 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-svc\") pod \"a63f7af9-5ea2-4091-901f-6d9187377785\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " Oct 11 11:05:19.755824 master-2 kubenswrapper[4776]: I1011 11:05:19.755656 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-swift-storage-0\") pod \"a63f7af9-5ea2-4091-901f-6d9187377785\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " Oct 11 11:05:19.755824 master-2 kubenswrapper[4776]: I1011 11:05:19.755745 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs8hv\" (UniqueName: \"kubernetes.io/projected/a63f7af9-5ea2-4091-901f-6d9187377785-kube-api-access-qs8hv\") pod \"a63f7af9-5ea2-4091-901f-6d9187377785\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " Oct 11 11:05:19.755824 master-2 kubenswrapper[4776]: I1011 11:05:19.755796 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-nb\") pod \"a63f7af9-5ea2-4091-901f-6d9187377785\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " Oct 11 11:05:19.755955 master-2 kubenswrapper[4776]: I1011 11:05:19.755865 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-networkers\") pod \"a63f7af9-5ea2-4091-901f-6d9187377785\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " Oct 11 11:05:19.755955 master-2 kubenswrapper[4776]: I1011 11:05:19.755902 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-sb\") pod \"a63f7af9-5ea2-4091-901f-6d9187377785\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " Oct 11 11:05:19.756024 master-2 kubenswrapper[4776]: I1011 11:05:19.755953 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-config\") pod \"a63f7af9-5ea2-4091-901f-6d9187377785\" (UID: \"a63f7af9-5ea2-4091-901f-6d9187377785\") " Oct 11 11:05:19.761776 master-2 kubenswrapper[4776]: I1011 11:05:19.761712 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a63f7af9-5ea2-4091-901f-6d9187377785-kube-api-access-qs8hv" (OuterVolumeSpecName: "kube-api-access-qs8hv") pod "a63f7af9-5ea2-4091-901f-6d9187377785" (UID: "a63f7af9-5ea2-4091-901f-6d9187377785"). InnerVolumeSpecName "kube-api-access-qs8hv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:05:19.807423 master-2 kubenswrapper[4776]: I1011 11:05:19.807363 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a63f7af9-5ea2-4091-901f-6d9187377785" (UID: "a63f7af9-5ea2-4091-901f-6d9187377785"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:19.811257 master-2 kubenswrapper[4776]: I1011 11:05:19.811194 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a63f7af9-5ea2-4091-901f-6d9187377785" (UID: "a63f7af9-5ea2-4091-901f-6d9187377785"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:19.812920 master-2 kubenswrapper[4776]: I1011 11:05:19.812849 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a63f7af9-5ea2-4091-901f-6d9187377785" (UID: "a63f7af9-5ea2-4091-901f-6d9187377785"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:19.816874 master-2 kubenswrapper[4776]: I1011 11:05:19.816770 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-config" (OuterVolumeSpecName: "config") pod "a63f7af9-5ea2-4091-901f-6d9187377785" (UID: "a63f7af9-5ea2-4091-901f-6d9187377785"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:19.822868 master-2 kubenswrapper[4776]: I1011 11:05:19.822786 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a63f7af9-5ea2-4091-901f-6d9187377785" (UID: "a63f7af9-5ea2-4091-901f-6d9187377785"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:19.832317 master-2 kubenswrapper[4776]: I1011 11:05:19.832250 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-networkers" (OuterVolumeSpecName: "networkers") pod "a63f7af9-5ea2-4091-901f-6d9187377785" (UID: "a63f7af9-5ea2-4091-901f-6d9187377785"). InnerVolumeSpecName "networkers". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:19.858619 master-2 kubenswrapper[4776]: I1011 11:05:19.858558 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-sb\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:19.858619 master-2 kubenswrapper[4776]: I1011 11:05:19.858605 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-config\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:19.858619 master-2 kubenswrapper[4776]: I1011 11:05:19.858614 4776 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-svc\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:19.858619 master-2 kubenswrapper[4776]: I1011 11:05:19.858623 4776 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-dns-swift-storage-0\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:19.858828 master-2 kubenswrapper[4776]: I1011 11:05:19.858634 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs8hv\" (UniqueName: \"kubernetes.io/projected/a63f7af9-5ea2-4091-901f-6d9187377785-kube-api-access-qs8hv\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:19.858828 master-2 kubenswrapper[4776]: I1011 11:05:19.858643 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-ovsdbserver-nb\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:19.858828 master-2 kubenswrapper[4776]: I1011 11:05:19.858651 4776 reconciler_common.go:293] "Volume detached for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/a63f7af9-5ea2-4091-901f-6d9187377785-networkers\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:19.967793 master-2 kubenswrapper[4776]: I1011 11:05:19.967714 4776 generic.go:334] "Generic (PLEG): container finished" podID="a63f7af9-5ea2-4091-901f-6d9187377785" containerID="207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64" exitCode=0 Oct 11 11:05:19.967793 master-2 kubenswrapper[4776]: I1011 11:05:19.967789 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85b986dbf-plkrt" event={"ID":"a63f7af9-5ea2-4091-901f-6d9187377785","Type":"ContainerDied","Data":"207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64"} Oct 11 11:05:19.968066 master-2 kubenswrapper[4776]: I1011 11:05:19.967821 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85b986dbf-plkrt" Oct 11 11:05:19.968066 master-2 kubenswrapper[4776]: I1011 11:05:19.967839 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85b986dbf-plkrt" event={"ID":"a63f7af9-5ea2-4091-901f-6d9187377785","Type":"ContainerDied","Data":"7e86f6a0692217b02770a72e3b5288c4aba15fbb51a8be198df074c909a4ebea"} Oct 11 11:05:19.968066 master-2 kubenswrapper[4776]: I1011 11:05:19.967887 4776 scope.go:117] "RemoveContainer" containerID="207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64" Oct 11 11:05:19.993345 master-2 kubenswrapper[4776]: I1011 11:05:19.993281 4776 scope.go:117] "RemoveContainer" containerID="eb352efe378a73f599c441374fa88609c6cdd75b37e89d6d6115d30787123e22" Oct 11 11:05:20.020697 master-2 kubenswrapper[4776]: I1011 11:05:20.020628 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85b986dbf-plkrt"] Oct 11 11:05:20.029537 master-2 kubenswrapper[4776]: I1011 11:05:20.029420 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85b986dbf-plkrt"] Oct 11 11:05:20.031443 master-2 kubenswrapper[4776]: I1011 11:05:20.031345 4776 scope.go:117] "RemoveContainer" containerID="207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64" Oct 11 11:05:20.032062 master-2 kubenswrapper[4776]: E1011 11:05:20.032012 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64\": container with ID starting with 207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64 not found: ID does not exist" containerID="207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64" Oct 11 11:05:20.032121 master-2 kubenswrapper[4776]: I1011 11:05:20.032072 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64"} err="failed to get container status \"207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64\": rpc error: code = NotFound desc = could not find container \"207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64\": container with ID starting with 207dfd6a32f7fefd4abd47c476fa78d8ebcad7b50d253051faa7cac8f3b02e64 not found: ID does not exist" Oct 11 11:05:20.032121 master-2 kubenswrapper[4776]: I1011 11:05:20.032111 4776 scope.go:117] "RemoveContainer" containerID="eb352efe378a73f599c441374fa88609c6cdd75b37e89d6d6115d30787123e22" Oct 11 11:05:20.032661 master-2 kubenswrapper[4776]: E1011 11:05:20.032611 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb352efe378a73f599c441374fa88609c6cdd75b37e89d6d6115d30787123e22\": container with ID starting with eb352efe378a73f599c441374fa88609c6cdd75b37e89d6d6115d30787123e22 not found: ID does not exist" containerID="eb352efe378a73f599c441374fa88609c6cdd75b37e89d6d6115d30787123e22" Oct 11 11:05:20.032740 master-2 kubenswrapper[4776]: I1011 11:05:20.032690 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb352efe378a73f599c441374fa88609c6cdd75b37e89d6d6115d30787123e22"} err="failed to get container status \"eb352efe378a73f599c441374fa88609c6cdd75b37e89d6d6115d30787123e22\": rpc error: code = NotFound desc = could not find container \"eb352efe378a73f599c441374fa88609c6cdd75b37e89d6d6115d30787123e22\": container 
with ID starting with eb352efe378a73f599c441374fa88609c6cdd75b37e89d6d6115d30787123e22 not found: ID does not exist" Oct 11 11:05:20.073328 master-2 kubenswrapper[4776]: I1011 11:05:20.073154 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a63f7af9-5ea2-4091-901f-6d9187377785" path="/var/lib/kubelet/pods/a63f7af9-5ea2-4091-901f-6d9187377785/volumes" Oct 11 11:05:40.571503 master-2 kubenswrapper[4776]: I1011 11:05:40.570178 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59dd57778c-sbrb9"] Oct 11 11:05:40.571503 master-2 kubenswrapper[4776]: I1011 11:05:40.570908 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" podUID="bdab7f7f-97fe-4393-859b-b71f68b588b4" containerName="dnsmasq-dns" containerID="cri-o://a642a0d29a9271c87588091a04275c01a52f4245f7c7b0cb7fa10b6f9d71d7b7" gracePeriod=10 Oct 11 11:05:41.151253 master-2 kubenswrapper[4776]: I1011 11:05:41.151190 4776 generic.go:334] "Generic (PLEG): container finished" podID="bdab7f7f-97fe-4393-859b-b71f68b588b4" containerID="a642a0d29a9271c87588091a04275c01a52f4245f7c7b0cb7fa10b6f9d71d7b7" exitCode=0 Oct 11 11:05:41.151253 master-2 kubenswrapper[4776]: I1011 11:05:41.151240 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" event={"ID":"bdab7f7f-97fe-4393-859b-b71f68b588b4","Type":"ContainerDied","Data":"a642a0d29a9271c87588091a04275c01a52f4245f7c7b0cb7fa10b6f9d71d7b7"} Oct 11 11:05:41.302995 master-2 kubenswrapper[4776]: I1011 11:05:41.302909 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:05:41.448526 master-2 kubenswrapper[4776]: I1011 11:05:41.448461 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-nb\") pod \"bdab7f7f-97fe-4393-859b-b71f68b588b4\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " Oct 11 11:05:41.448791 master-2 kubenswrapper[4776]: I1011 11:05:41.448606 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-swift-storage-0\") pod \"bdab7f7f-97fe-4393-859b-b71f68b588b4\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " Oct 11 11:05:41.448791 master-2 kubenswrapper[4776]: I1011 11:05:41.448740 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-config\") pod \"bdab7f7f-97fe-4393-859b-b71f68b588b4\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " Oct 11 11:05:41.448873 master-2 kubenswrapper[4776]: I1011 11:05:41.448802 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-svc\") pod \"bdab7f7f-97fe-4393-859b-b71f68b588b4\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " Oct 11 11:05:41.449927 master-2 kubenswrapper[4776]: I1011 11:05:41.448889 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-sb\") pod \"bdab7f7f-97fe-4393-859b-b71f68b588b4\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " Oct 11 11:05:41.449927 
master-2 kubenswrapper[4776]: I1011 11:05:41.448934 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-networkers\") pod \"bdab7f7f-97fe-4393-859b-b71f68b588b4\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " Oct 11 11:05:41.449927 master-2 kubenswrapper[4776]: I1011 11:05:41.448959 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt62j\" (UniqueName: \"kubernetes.io/projected/bdab7f7f-97fe-4393-859b-b71f68b588b4-kube-api-access-kt62j\") pod \"bdab7f7f-97fe-4393-859b-b71f68b588b4\" (UID: \"bdab7f7f-97fe-4393-859b-b71f68b588b4\") " Oct 11 11:05:41.456154 master-2 kubenswrapper[4776]: I1011 11:05:41.456079 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdab7f7f-97fe-4393-859b-b71f68b588b4-kube-api-access-kt62j" (OuterVolumeSpecName: "kube-api-access-kt62j") pod "bdab7f7f-97fe-4393-859b-b71f68b588b4" (UID: "bdab7f7f-97fe-4393-859b-b71f68b588b4"). InnerVolumeSpecName "kube-api-access-kt62j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:05:41.494464 master-2 kubenswrapper[4776]: I1011 11:05:41.494410 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bdab7f7f-97fe-4393-859b-b71f68b588b4" (UID: "bdab7f7f-97fe-4393-859b-b71f68b588b4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:41.497240 master-2 kubenswrapper[4776]: I1011 11:05:41.497155 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-config" (OuterVolumeSpecName: "config") pod "bdab7f7f-97fe-4393-859b-b71f68b588b4" (UID: "bdab7f7f-97fe-4393-859b-b71f68b588b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:41.498049 master-2 kubenswrapper[4776]: I1011 11:05:41.497992 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bdab7f7f-97fe-4393-859b-b71f68b588b4" (UID: "bdab7f7f-97fe-4393-859b-b71f68b588b4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:41.503582 master-2 kubenswrapper[4776]: I1011 11:05:41.501097 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bdab7f7f-97fe-4393-859b-b71f68b588b4" (UID: "bdab7f7f-97fe-4393-859b-b71f68b588b4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:41.512261 master-2 kubenswrapper[4776]: I1011 11:05:41.512216 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bdab7f7f-97fe-4393-859b-b71f68b588b4" (UID: "bdab7f7f-97fe-4393-859b-b71f68b588b4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:41.518294 master-2 kubenswrapper[4776]: I1011 11:05:41.517648 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-networkers" (OuterVolumeSpecName: "networkers") pod "bdab7f7f-97fe-4393-859b-b71f68b588b4" (UID: "bdab7f7f-97fe-4393-859b-b71f68b588b4"). InnerVolumeSpecName "networkers". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:05:41.551309 master-2 kubenswrapper[4776]: I1011 11:05:41.551245 4776 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-swift-storage-0\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:41.551309 master-2 kubenswrapper[4776]: I1011 11:05:41.551298 4776 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-config\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:41.551309 master-2 kubenswrapper[4776]: I1011 11:05:41.551312 4776 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-dns-svc\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:41.551670 master-2 kubenswrapper[4776]: I1011 11:05:41.551326 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-sb\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:41.551670 master-2 kubenswrapper[4776]: I1011 11:05:41.551338 4776 reconciler_common.go:293] "Volume detached for volume \"networkers\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-networkers\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:41.551670 master-2 kubenswrapper[4776]: I1011 11:05:41.551349 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt62j\" (UniqueName: \"kubernetes.io/projected/bdab7f7f-97fe-4393-859b-b71f68b588b4-kube-api-access-kt62j\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:41.551670 master-2 kubenswrapper[4776]: I1011 11:05:41.551360 4776 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdab7f7f-97fe-4393-859b-b71f68b588b4-ovsdbserver-nb\") on node \"master-2\" DevicePath \"\"" Oct 11 11:05:42.165389 master-2 kubenswrapper[4776]: I1011 11:05:42.165296 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" event={"ID":"bdab7f7f-97fe-4393-859b-b71f68b588b4","Type":"ContainerDied","Data":"47eafd73d1b09f5fdc1106e16dc957897cec09f84dde7bc47e0e3b86cb99cab7"} Oct 11 11:05:42.165389 master-2 kubenswrapper[4776]: I1011 11:05:42.165358 4776 scope.go:117] "RemoveContainer" containerID="a642a0d29a9271c87588091a04275c01a52f4245f7c7b0cb7fa10b6f9d71d7b7" Oct 11 11:05:42.165389 master-2 kubenswrapper[4776]: I1011 11:05:42.165380 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59dd57778c-sbrb9" Oct 11 11:05:42.197776 master-2 kubenswrapper[4776]: I1011 11:05:42.196458 4776 scope.go:117] "RemoveContainer" containerID="2822f6bcfdf6e3e239b9eb0a0c752c195651ceefe7ca225b476e6aad99bf8b53" Oct 11 11:05:42.207166 master-2 kubenswrapper[4776]: I1011 11:05:42.207117 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59dd57778c-sbrb9"] Oct 11 11:05:42.213423 master-2 kubenswrapper[4776]: I1011 11:05:42.213370 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59dd57778c-sbrb9"] Oct 11 11:05:44.091702 master-2 kubenswrapper[4776]: I1011 11:05:44.091177 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdab7f7f-97fe-4393-859b-b71f68b588b4" path="/var/lib/kubelet/pods/bdab7f7f-97fe-4393-859b-b71f68b588b4/volumes" Oct 11 11:05:53.033727 master-2 kubenswrapper[4776]: I1011 11:05:53.033658 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-faff-account-create-nc9gw"] Oct 11 11:05:53.043870 master-2 kubenswrapper[4776]: I1011 11:05:53.043745 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-faff-account-create-nc9gw"] Oct 11 11:05:54.070477 master-2 kubenswrapper[4776]: I1011 11:05:54.070412 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be6ef0ad-fb25-4af2-a9fc-c89be4b1983b" path="/var/lib/kubelet/pods/be6ef0ad-fb25-4af2-a9fc-c89be4b1983b/volumes" Oct 11 11:06:00.992540 master-2 kubenswrapper[4776]: I1011 11:06:00.992437 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-networker-deploy-networkers-hpjcr"] Oct 11 11:06:00.993387 master-2 kubenswrapper[4776]: E1011 11:06:00.993139 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdab7f7f-97fe-4393-859b-b71f68b588b4" containerName="dnsmasq-dns" Oct 11 11:06:00.993387 master-2 kubenswrapper[4776]: I1011 11:06:00.993165 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdab7f7f-97fe-4393-859b-b71f68b588b4" containerName="dnsmasq-dns" Oct 11 11:06:00.993387 master-2 kubenswrapper[4776]: E1011 11:06:00.993195 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a63f7af9-5ea2-4091-901f-6d9187377785" containerName="dnsmasq-dns" Oct 11 11:06:00.993387 master-2 kubenswrapper[4776]: I1011 11:06:00.993204 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="a63f7af9-5ea2-4091-901f-6d9187377785" containerName="dnsmasq-dns" Oct 11 11:06:00.993387 master-2 kubenswrapper[4776]: E1011 11:06:00.993245 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdab7f7f-97fe-4393-859b-b71f68b588b4" containerName="init" Oct 11 11:06:00.993387 master-2 kubenswrapper[4776]: I1011 11:06:00.993254 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdab7f7f-97fe-4393-859b-b71f68b588b4" containerName="init" Oct 11 11:06:00.993387 master-2 kubenswrapper[4776]: E1011 11:06:00.993277 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a63f7af9-5ea2-4091-901f-6d9187377785" containerName="init" Oct 11 11:06:00.993387 master-2 kubenswrapper[4776]: I1011 11:06:00.993285 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="a63f7af9-5ea2-4091-901f-6d9187377785" containerName="init" Oct 11 11:06:00.993723 master-2 kubenswrapper[4776]: I1011 11:06:00.993531 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdab7f7f-97fe-4393-859b-b71f68b588b4" containerName="dnsmasq-dns" Oct 11 11:06:00.993723 
master-2 kubenswrapper[4776]: I1011 11:06:00.993559 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="a63f7af9-5ea2-4091-901f-6d9187377785" containerName="dnsmasq-dns" Oct 11 11:06:00.995050 master-2 kubenswrapper[4776]: I1011 11:06:00.994799 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:00.998024 master-2 kubenswrapper[4776]: I1011 11:06:00.997973 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 11:06:00.998300 master-2 kubenswrapper[4776]: I1011 11:06:00.998253 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:06:00.998458 master-2 kubenswrapper[4776]: I1011 11:06:00.998278 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-networkers" Oct 11 11:06:01.007494 master-2 kubenswrapper[4776]: I1011 11:06:01.007445 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-networker-deploy-networkers-hpjcr"] Oct 11 11:06:01.068573 master-2 kubenswrapper[4776]: I1011 11:06:01.068482 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-inventory\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.068573 master-2 kubenswrapper[4776]: I1011 11:06:01.068584 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-ssh-key\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.068991 master-2 kubenswrapper[4776]: I1011 11:06:01.068714 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fxlr\" (UniqueName: \"kubernetes.io/projected/e0d073f2-1387-41f9-9e3d-71e1057293f9-kube-api-access-4fxlr\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.068991 master-2 kubenswrapper[4776]: I1011 11:06:01.068821 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-bootstrap-combined-ca-bundle\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.170827 master-2 kubenswrapper[4776]: I1011 11:06:01.170775 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-ssh-key\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.171066 master-2 kubenswrapper[4776]: I1011 11:06:01.170884 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4fxlr\" (UniqueName: \"kubernetes.io/projected/e0d073f2-1387-41f9-9e3d-71e1057293f9-kube-api-access-4fxlr\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.171066 master-2 kubenswrapper[4776]: I1011 11:06:01.170981 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-bootstrap-combined-ca-bundle\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.171185 master-2 kubenswrapper[4776]: I1011 11:06:01.171167 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-inventory\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.175027 master-2 kubenswrapper[4776]: I1011 11:06:01.175009 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-inventory\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.175392 master-2 kubenswrapper[4776]: I1011 11:06:01.175365 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-bootstrap-combined-ca-bundle\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.190634 master-2 kubenswrapper[4776]: I1011 11:06:01.190450 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-ssh-key\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.194489 master-2 kubenswrapper[4776]: I1011 11:06:01.194457 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fxlr\" (UniqueName: \"kubernetes.io/projected/e0d073f2-1387-41f9-9e3d-71e1057293f9-kube-api-access-4fxlr\") pod \"bootstrap-networker-deploy-networkers-hpjcr\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.326285 master-2 kubenswrapper[4776]: I1011 11:06:01.326221 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:06:01.948394 master-2 kubenswrapper[4776]: I1011 11:06:01.948250 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-networker-deploy-networkers-hpjcr"] Oct 11 11:06:01.952611 master-2 kubenswrapper[4776]: W1011 11:06:01.952541 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0d073f2_1387_41f9_9e3d_71e1057293f9.slice/crio-edc4f9119e96838dd2383ba29e5f21de62558eb73ab2b4b283c2439ebc73cdd8 WatchSource:0}: Error finding container edc4f9119e96838dd2383ba29e5f21de62558eb73ab2b4b283c2439ebc73cdd8: Status 404 returned error can't find the container with id edc4f9119e96838dd2383ba29e5f21de62558eb73ab2b4b283c2439ebc73cdd8 Oct 11 11:06:02.386170 master-2 kubenswrapper[4776]: I1011 11:06:02.386095 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" event={"ID":"e0d073f2-1387-41f9-9e3d-71e1057293f9","Type":"ContainerStarted","Data":"edc4f9119e96838dd2383ba29e5f21de62558eb73ab2b4b283c2439ebc73cdd8"} Oct 11 11:06:11.122964 master-2 kubenswrapper[4776]: I1011 11:06:11.122901 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:06:11.486404 master-2 kubenswrapper[4776]: I1011 11:06:11.484899 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" event={"ID":"e0d073f2-1387-41f9-9e3d-71e1057293f9","Type":"ContainerStarted","Data":"b093ff59d085519739327a515a07ac3ab6962069ac919c6da772163834377b3f"} Oct 11 11:06:11.521756 master-2 kubenswrapper[4776]: I1011 11:06:11.521284 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" podStartSLOduration=2.3576906429999998 podStartE2EDuration="11.521267918s" podCreationTimestamp="2025-10-11 11:06:00 +0000 UTC" firstStartedPulling="2025-10-11 11:06:01.95640258 +0000 UTC m=+2396.740829289" lastFinishedPulling="2025-10-11 11:06:11.119979855 +0000 UTC m=+2405.904406564" observedRunningTime="2025-10-11 11:06:11.515404438 +0000 UTC m=+2406.299831147" watchObservedRunningTime="2025-10-11 11:06:11.521267918 +0000 UTC m=+2406.305694627" Oct 11 11:06:18.166218 master-2 kubenswrapper[4776]: I1011 11:06:18.166155 4776 scope.go:117] "RemoveContainer" containerID="b466abf7a0b4707a238b9568c5f5c7ad243418122b1d4aff19889a45820a6369" Oct 11 11:07:28.706344 master-2 kubenswrapper[4776]: I1011 11:07:28.705891 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lkxll"] Oct 11 11:07:28.710441 master-2 kubenswrapper[4776]: I1011 11:07:28.708643 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lkxll" Oct 11 11:07:28.721466 master-2 kubenswrapper[4776]: I1011 11:07:28.721389 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lkxll"] Oct 11 11:07:28.787203 master-2 kubenswrapper[4776]: I1011 11:07:28.787138 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-catalog-content\") pod \"certified-operators-lkxll\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " pod="openshift-marketplace/certified-operators-lkxll" Oct 11 11:07:28.787468 master-2 kubenswrapper[4776]: I1011 11:07:28.787408 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-utilities\") pod \"certified-operators-lkxll\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " pod="openshift-marketplace/certified-operators-lkxll" Oct 11 11:07:28.787649 master-2 kubenswrapper[4776]: I1011 11:07:28.787617 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqjbc\" (UniqueName: \"kubernetes.io/projected/d3740bfc-2abd-4b82-897e-ce53c4fa4324-kube-api-access-jqjbc\") pod \"certified-operators-lkxll\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " pod="openshift-marketplace/certified-operators-lkxll" Oct 11 11:07:28.889818 master-2 kubenswrapper[4776]: I1011 11:07:28.889450 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-catalog-content\") pod \"certified-operators-lkxll\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " pod="openshift-marketplace/certified-operators-lkxll" Oct 11 11:07:28.890087 master-2 kubenswrapper[4776]: I1011 11:07:28.889836 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-utilities\") pod \"certified-operators-lkxll\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " pod="openshift-marketplace/certified-operators-lkxll" Oct 11 11:07:28.890087 master-2 kubenswrapper[4776]: I1011 11:07:28.889943 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqjbc\" (UniqueName: \"kubernetes.io/projected/d3740bfc-2abd-4b82-897e-ce53c4fa4324-kube-api-access-jqjbc\") pod \"certified-operators-lkxll\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " pod="openshift-marketplace/certified-operators-lkxll" Oct 11 11:07:28.890202 master-2 kubenswrapper[4776]: I1011 11:07:28.890163 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-catalog-content\") pod \"certified-operators-lkxll\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " pod="openshift-marketplace/certified-operators-lkxll" Oct 11 11:07:28.890467 master-2 kubenswrapper[4776]: I1011 11:07:28.890438 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-utilities\") pod \"certified-operators-lkxll\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " 
pod="openshift-marketplace/certified-operators-lkxll" Oct 11 11:07:28.914755 master-2 kubenswrapper[4776]: I1011 11:07:28.914695 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqjbc\" (UniqueName: \"kubernetes.io/projected/d3740bfc-2abd-4b82-897e-ce53c4fa4324-kube-api-access-jqjbc\") pod \"certified-operators-lkxll\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " pod="openshift-marketplace/certified-operators-lkxll" Oct 11 11:07:29.039367 master-2 kubenswrapper[4776]: I1011 11:07:29.039191 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lkxll" Oct 11 11:07:29.501853 master-2 kubenswrapper[4776]: I1011 11:07:29.501784 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lkxll"] Oct 11 11:07:29.505986 master-2 kubenswrapper[4776]: W1011 11:07:29.505922 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3740bfc_2abd_4b82_897e_ce53c4fa4324.slice/crio-5d62d892e763fd982d96f4fde2f5d10b2f577a9b0d439d782eb4ae9756201327 WatchSource:0}: Error finding container 5d62d892e763fd982d96f4fde2f5d10b2f577a9b0d439d782eb4ae9756201327: Status 404 returned error can't find the container with id 5d62d892e763fd982d96f4fde2f5d10b2f577a9b0d439d782eb4ae9756201327 Oct 11 11:07:30.263704 master-2 kubenswrapper[4776]: I1011 11:07:30.263618 4776 generic.go:334] "Generic (PLEG): container finished" podID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" containerID="771cf7d2c437cbd2b066674b2e8953bdd61419d156e2d9fd0d38039d3abbcdfa" exitCode=0 Oct 11 11:07:30.264299 master-2 kubenswrapper[4776]: I1011 11:07:30.263739 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkxll" event={"ID":"d3740bfc-2abd-4b82-897e-ce53c4fa4324","Type":"ContainerDied","Data":"771cf7d2c437cbd2b066674b2e8953bdd61419d156e2d9fd0d38039d3abbcdfa"} Oct 11 11:07:30.264299 master-2 kubenswrapper[4776]: I1011 11:07:30.263868 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkxll" event={"ID":"d3740bfc-2abd-4b82-897e-ce53c4fa4324","Type":"ContainerStarted","Data":"5d62d892e763fd982d96f4fde2f5d10b2f577a9b0d439d782eb4ae9756201327"} Oct 11 11:07:31.276055 master-2 kubenswrapper[4776]: I1011 11:07:31.275975 4776 generic.go:334] "Generic (PLEG): container finished" podID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" containerID="9195079ba196503bb72f278a15987d3d4d6cfdbe5832ba4ad851edf5a3520416" exitCode=0 Oct 11 11:07:31.276055 master-2 kubenswrapper[4776]: I1011 11:07:31.276051 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkxll" event={"ID":"d3740bfc-2abd-4b82-897e-ce53c4fa4324","Type":"ContainerDied","Data":"9195079ba196503bb72f278a15987d3d4d6cfdbe5832ba4ad851edf5a3520416"} Oct 11 11:07:32.287063 master-2 kubenswrapper[4776]: I1011 11:07:32.286986 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkxll" event={"ID":"d3740bfc-2abd-4b82-897e-ce53c4fa4324","Type":"ContainerStarted","Data":"77b9a96c3d235a44cb31104f6ad3f5c6bb437f1efad3dbcfebf2e2dd8c332917"} Oct 11 11:07:32.324710 master-2 kubenswrapper[4776]: I1011 11:07:32.324551 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lkxll" podStartSLOduration=2.90910971 podStartE2EDuration="4.324532942s" 
podCreationTimestamp="2025-10-11 11:07:28 +0000 UTC" firstStartedPulling="2025-10-11 11:07:30.268080989 +0000 UTC m=+2485.052507708" lastFinishedPulling="2025-10-11 11:07:31.683504221 +0000 UTC m=+2486.467930940" observedRunningTime="2025-10-11 11:07:32.316186126 +0000 UTC m=+2487.100612845" watchObservedRunningTime="2025-10-11 11:07:32.324532942 +0000 UTC m=+2487.108959651" Oct 11 11:07:39.039520 master-2 kubenswrapper[4776]: I1011 11:07:39.039387 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lkxll" Oct 11 11:07:39.040451 master-2 kubenswrapper[4776]: I1011 11:07:39.040430 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lkxll" Oct 11 11:07:39.100217 master-2 kubenswrapper[4776]: I1011 11:07:39.100162 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lkxll" Oct 11 11:07:39.390078 master-2 kubenswrapper[4776]: I1011 11:07:39.389968 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lkxll" Oct 11 11:07:39.471698 master-2 kubenswrapper[4776]: I1011 11:07:39.471590 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lkxll"] Oct 11 11:07:41.365597 master-2 kubenswrapper[4776]: I1011 11:07:41.365519 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lkxll" podUID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" containerName="registry-server" containerID="cri-o://77b9a96c3d235a44cb31104f6ad3f5c6bb437f1efad3dbcfebf2e2dd8c332917" gracePeriod=2 Oct 11 11:07:42.385798 master-2 kubenswrapper[4776]: I1011 11:07:42.385735 4776 generic.go:334] "Generic (PLEG): container finished" podID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" containerID="77b9a96c3d235a44cb31104f6ad3f5c6bb437f1efad3dbcfebf2e2dd8c332917" exitCode=0 Oct 11 11:07:42.385798 master-2 kubenswrapper[4776]: I1011 11:07:42.385784 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkxll" event={"ID":"d3740bfc-2abd-4b82-897e-ce53c4fa4324","Type":"ContainerDied","Data":"77b9a96c3d235a44cb31104f6ad3f5c6bb437f1efad3dbcfebf2e2dd8c332917"} Oct 11 11:07:42.653149 master-2 kubenswrapper[4776]: I1011 11:07:42.653091 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lkxll" Oct 11 11:07:42.757630 master-2 kubenswrapper[4776]: I1011 11:07:42.757580 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-catalog-content\") pod \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " Oct 11 11:07:42.757984 master-2 kubenswrapper[4776]: I1011 11:07:42.757964 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-utilities\") pod \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " Oct 11 11:07:42.758312 master-2 kubenswrapper[4776]: I1011 11:07:42.758291 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqjbc\" (UniqueName: \"kubernetes.io/projected/d3740bfc-2abd-4b82-897e-ce53c4fa4324-kube-api-access-jqjbc\") pod \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\" (UID: \"d3740bfc-2abd-4b82-897e-ce53c4fa4324\") " Oct 11 11:07:42.759822 master-2 kubenswrapper[4776]: I1011 11:07:42.759768 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-utilities" (OuterVolumeSpecName: "utilities") pod "d3740bfc-2abd-4b82-897e-ce53c4fa4324" (UID: "d3740bfc-2abd-4b82-897e-ce53c4fa4324"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 11:07:42.763721 master-2 kubenswrapper[4776]: I1011 11:07:42.763694 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3740bfc-2abd-4b82-897e-ce53c4fa4324-kube-api-access-jqjbc" (OuterVolumeSpecName: "kube-api-access-jqjbc") pod "d3740bfc-2abd-4b82-897e-ce53c4fa4324" (UID: "d3740bfc-2abd-4b82-897e-ce53c4fa4324"). InnerVolumeSpecName "kube-api-access-jqjbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:07:42.800868 master-2 kubenswrapper[4776]: I1011 11:07:42.800815 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3740bfc-2abd-4b82-897e-ce53c4fa4324" (UID: "d3740bfc-2abd-4b82-897e-ce53c4fa4324"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 11:07:42.859955 master-2 kubenswrapper[4776]: I1011 11:07:42.859841 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqjbc\" (UniqueName: \"kubernetes.io/projected/d3740bfc-2abd-4b82-897e-ce53c4fa4324-kube-api-access-jqjbc\") on node \"master-2\" DevicePath \"\"" Oct 11 11:07:42.859955 master-2 kubenswrapper[4776]: I1011 11:07:42.859880 4776 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-catalog-content\") on node \"master-2\" DevicePath \"\"" Oct 11 11:07:42.859955 master-2 kubenswrapper[4776]: I1011 11:07:42.859889 4776 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3740bfc-2abd-4b82-897e-ce53c4fa4324-utilities\") on node \"master-2\" DevicePath \"\"" Oct 11 11:07:43.422723 master-2 kubenswrapper[4776]: I1011 11:07:43.422628 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkxll" event={"ID":"d3740bfc-2abd-4b82-897e-ce53c4fa4324","Type":"ContainerDied","Data":"5d62d892e763fd982d96f4fde2f5d10b2f577a9b0d439d782eb4ae9756201327"} Oct 11 11:07:43.423442 master-2 kubenswrapper[4776]: I1011 11:07:43.422788 4776 scope.go:117] "RemoveContainer" containerID="77b9a96c3d235a44cb31104f6ad3f5c6bb437f1efad3dbcfebf2e2dd8c332917" Oct 11 11:07:43.423442 master-2 kubenswrapper[4776]: I1011 11:07:43.422828 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lkxll" Oct 11 11:07:43.455993 master-2 kubenswrapper[4776]: I1011 11:07:43.455949 4776 scope.go:117] "RemoveContainer" containerID="9195079ba196503bb72f278a15987d3d4d6cfdbe5832ba4ad851edf5a3520416" Oct 11 11:07:43.474559 master-2 kubenswrapper[4776]: I1011 11:07:43.474494 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lkxll"] Oct 11 11:07:43.480700 master-2 kubenswrapper[4776]: I1011 11:07:43.480639 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lkxll"] Oct 11 11:07:43.489536 master-2 kubenswrapper[4776]: I1011 11:07:43.489489 4776 scope.go:117] "RemoveContainer" containerID="771cf7d2c437cbd2b066674b2e8953bdd61419d156e2d9fd0d38039d3abbcdfa" Oct 11 11:07:44.070812 master-2 kubenswrapper[4776]: I1011 11:07:44.070767 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" path="/var/lib/kubelet/pods/d3740bfc-2abd-4b82-897e-ce53c4fa4324/volumes" Oct 11 11:10:04.786097 master-2 kubenswrapper[4776]: I1011 11:10:04.786017 4776 generic.go:334] "Generic (PLEG): container finished" podID="e0d073f2-1387-41f9-9e3d-71e1057293f9" containerID="b093ff59d085519739327a515a07ac3ab6962069ac919c6da772163834377b3f" exitCode=0 Oct 11 11:10:04.786097 master-2 kubenswrapper[4776]: I1011 11:10:04.786094 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" event={"ID":"e0d073f2-1387-41f9-9e3d-71e1057293f9","Type":"ContainerDied","Data":"b093ff59d085519739327a515a07ac3ab6962069ac919c6da772163834377b3f"} Oct 11 11:10:06.400377 master-2 kubenswrapper[4776]: I1011 11:10:06.400322 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:10:06.464915 master-2 kubenswrapper[4776]: I1011 11:10:06.464454 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-ssh-key\") pod \"e0d073f2-1387-41f9-9e3d-71e1057293f9\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " Oct 11 11:10:06.464915 master-2 kubenswrapper[4776]: I1011 11:10:06.464569 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-bootstrap-combined-ca-bundle\") pod \"e0d073f2-1387-41f9-9e3d-71e1057293f9\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " Oct 11 11:10:06.464915 master-2 kubenswrapper[4776]: I1011 11:10:06.464868 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fxlr\" (UniqueName: \"kubernetes.io/projected/e0d073f2-1387-41f9-9e3d-71e1057293f9-kube-api-access-4fxlr\") pod \"e0d073f2-1387-41f9-9e3d-71e1057293f9\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " Oct 11 11:10:06.465244 master-2 kubenswrapper[4776]: I1011 11:10:06.464995 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-inventory\") pod \"e0d073f2-1387-41f9-9e3d-71e1057293f9\" (UID: \"e0d073f2-1387-41f9-9e3d-71e1057293f9\") " Oct 11 11:10:06.468779 master-2 kubenswrapper[4776]: I1011 11:10:06.467865 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e0d073f2-1387-41f9-9e3d-71e1057293f9" (UID: "e0d073f2-1387-41f9-9e3d-71e1057293f9"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:10:06.468779 master-2 kubenswrapper[4776]: I1011 11:10:06.468128 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0d073f2-1387-41f9-9e3d-71e1057293f9-kube-api-access-4fxlr" (OuterVolumeSpecName: "kube-api-access-4fxlr") pod "e0d073f2-1387-41f9-9e3d-71e1057293f9" (UID: "e0d073f2-1387-41f9-9e3d-71e1057293f9"). InnerVolumeSpecName "kube-api-access-4fxlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:10:06.487362 master-2 kubenswrapper[4776]: I1011 11:10:06.487292 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e0d073f2-1387-41f9-9e3d-71e1057293f9" (UID: "e0d073f2-1387-41f9-9e3d-71e1057293f9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:10:06.493921 master-2 kubenswrapper[4776]: I1011 11:10:06.493880 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-inventory" (OuterVolumeSpecName: "inventory") pod "e0d073f2-1387-41f9-9e3d-71e1057293f9" (UID: "e0d073f2-1387-41f9-9e3d-71e1057293f9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:10:06.568915 master-2 kubenswrapper[4776]: I1011 11:10:06.568844 4776 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-ssh-key\") on node \"master-2\" DevicePath \"\"" Oct 11 11:10:06.568915 master-2 kubenswrapper[4776]: I1011 11:10:06.568920 4776 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-bootstrap-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 11:10:06.569229 master-2 kubenswrapper[4776]: I1011 11:10:06.568934 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fxlr\" (UniqueName: \"kubernetes.io/projected/e0d073f2-1387-41f9-9e3d-71e1057293f9-kube-api-access-4fxlr\") on node \"master-2\" DevicePath \"\"" Oct 11 11:10:06.569229 master-2 kubenswrapper[4776]: I1011 11:10:06.568949 4776 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0d073f2-1387-41f9-9e3d-71e1057293f9-inventory\") on node \"master-2\" DevicePath \"\"" Oct 11 11:10:06.827260 master-2 kubenswrapper[4776]: I1011 11:10:06.827187 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" event={"ID":"e0d073f2-1387-41f9-9e3d-71e1057293f9","Type":"ContainerDied","Data":"edc4f9119e96838dd2383ba29e5f21de62558eb73ab2b4b283c2439ebc73cdd8"} Oct 11 11:10:06.827260 master-2 kubenswrapper[4776]: I1011 11:10:06.827245 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edc4f9119e96838dd2383ba29e5f21de62558eb73ab2b4b283c2439ebc73cdd8" Oct 11 11:10:06.827551 master-2 kubenswrapper[4776]: I1011 11:10:06.827284 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-networker-deploy-networkers-hpjcr" Oct 11 11:11:32.106240 master-2 kubenswrapper[4776]: I1011 11:11:32.106171 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-dataplane-edpm-qw76s"] Oct 11 11:11:32.107046 master-2 kubenswrapper[4776]: E1011 11:11:32.106580 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d073f2-1387-41f9-9e3d-71e1057293f9" containerName="bootstrap-networker-deploy-networkers" Oct 11 11:11:32.107046 master-2 kubenswrapper[4776]: I1011 11:11:32.106598 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d073f2-1387-41f9-9e3d-71e1057293f9" containerName="bootstrap-networker-deploy-networkers" Oct 11 11:11:32.107046 master-2 kubenswrapper[4776]: E1011 11:11:32.106610 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" containerName="registry-server" Oct 11 11:11:32.107046 master-2 kubenswrapper[4776]: I1011 11:11:32.106619 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" containerName="registry-server" Oct 11 11:11:32.107046 master-2 kubenswrapper[4776]: E1011 11:11:32.106646 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" containerName="extract-utilities" Oct 11 11:11:32.107046 master-2 kubenswrapper[4776]: I1011 11:11:32.106656 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" containerName="extract-utilities" Oct 11 11:11:32.107046 master-2 kubenswrapper[4776]: E1011 11:11:32.106697 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" containerName="extract-content" Oct 11 11:11:32.107046 master-2 kubenswrapper[4776]: I1011 11:11:32.106706 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" containerName="extract-content" Oct 11 11:11:32.107046 master-2 kubenswrapper[4776]: I1011 11:11:32.106896 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3740bfc-2abd-4b82-897e-ce53c4fa4324" containerName="registry-server" Oct 11 11:11:32.107046 master-2 kubenswrapper[4776]: I1011 11:11:32.106929 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d073f2-1387-41f9-9e3d-71e1057293f9" containerName="bootstrap-networker-deploy-networkers" Oct 11 11:11:32.107787 master-2 kubenswrapper[4776]: I1011 11:11:32.107750 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:32.110869 master-2 kubenswrapper[4776]: I1011 11:11:32.110280 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:11:32.110869 master-2 kubenswrapper[4776]: I1011 11:11:32.110306 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-edpm" Oct 11 11:11:32.111022 master-2 kubenswrapper[4776]: I1011 11:11:32.110955 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 11:11:32.153477 master-2 kubenswrapper[4776]: I1011 11:11:32.121259 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-dataplane-edpm-qw76s"] Oct 11 11:11:32.268909 master-2 kubenswrapper[4776]: I1011 11:11:32.265639 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-ssh-key\") pod \"validate-network-dataplane-edpm-qw76s\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:32.268909 master-2 kubenswrapper[4776]: I1011 11:11:32.266403 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwjj9\" (UniqueName: \"kubernetes.io/projected/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-kube-api-access-wwjj9\") pod \"validate-network-dataplane-edpm-qw76s\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:32.268909 master-2 kubenswrapper[4776]: I1011 11:11:32.266805 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-inventory\") pod \"validate-network-dataplane-edpm-qw76s\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:32.369414 master-2 kubenswrapper[4776]: I1011 11:11:32.369219 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-inventory\") pod \"validate-network-dataplane-edpm-qw76s\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:32.369414 master-2 kubenswrapper[4776]: I1011 11:11:32.369387 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-ssh-key\") pod \"validate-network-dataplane-edpm-qw76s\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:32.369414 master-2 kubenswrapper[4776]: I1011 11:11:32.369414 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwjj9\" (UniqueName: \"kubernetes.io/projected/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-kube-api-access-wwjj9\") pod \"validate-network-dataplane-edpm-qw76s\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:32.372929 master-2 kubenswrapper[4776]: I1011 11:11:32.372878 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-inventory\") pod \"validate-network-dataplane-edpm-qw76s\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:32.381733 master-2 kubenswrapper[4776]: I1011 11:11:32.373974 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-ssh-key\") pod \"validate-network-dataplane-edpm-qw76s\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:32.390550 master-2 kubenswrapper[4776]: I1011 11:11:32.390490 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwjj9\" (UniqueName: \"kubernetes.io/projected/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-kube-api-access-wwjj9\") pod \"validate-network-dataplane-edpm-qw76s\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:32.465445 master-2 kubenswrapper[4776]: I1011 11:11:32.465368 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:33.001880 master-2 kubenswrapper[4776]: I1011 11:11:33.001839 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-dataplane-edpm-qw76s"] Oct 11 11:11:33.003580 master-2 kubenswrapper[4776]: I1011 11:11:33.003541 4776 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 11:11:33.620438 master-2 kubenswrapper[4776]: I1011 11:11:33.620377 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-dataplane-edpm-qw76s" event={"ID":"7d0d6c1e-ea6f-481f-976b-9668d70d0b12","Type":"ContainerStarted","Data":"1e568579c9a19177244c1a879b4e8845b9e8d161fb50ed43e44b349db3481f1b"} Oct 11 11:11:34.628889 master-2 kubenswrapper[4776]: I1011 11:11:34.628802 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-dataplane-edpm-qw76s" event={"ID":"7d0d6c1e-ea6f-481f-976b-9668d70d0b12","Type":"ContainerStarted","Data":"b9dba38d8519dbf93ea62185fc5465f76d79d37384f84f54450af50ba090a712"} Oct 11 11:11:34.655092 master-2 kubenswrapper[4776]: I1011 11:11:34.654987 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-dataplane-edpm-qw76s" podStartSLOduration=2.166466535 podStartE2EDuration="2.654969237s" podCreationTimestamp="2025-10-11 11:11:32 +0000 UTC" firstStartedPulling="2025-10-11 11:11:33.003504037 +0000 UTC m=+2727.787930746" lastFinishedPulling="2025-10-11 11:11:33.492006739 +0000 UTC m=+2728.276433448" observedRunningTime="2025-10-11 11:11:34.650728413 +0000 UTC m=+2729.435155112" watchObservedRunningTime="2025-10-11 11:11:34.654969237 +0000 UTC m=+2729.439395946" Oct 11 11:11:38.663411 master-2 kubenswrapper[4776]: I1011 11:11:38.663340 4776 generic.go:334] "Generic (PLEG): container finished" podID="7d0d6c1e-ea6f-481f-976b-9668d70d0b12" containerID="b9dba38d8519dbf93ea62185fc5465f76d79d37384f84f54450af50ba090a712" exitCode=0 Oct 11 11:11:38.663411 master-2 kubenswrapper[4776]: I1011 11:11:38.663396 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-dataplane-edpm-qw76s" event={"ID":"7d0d6c1e-ea6f-481f-976b-9668d70d0b12","Type":"ContainerDied","Data":"b9dba38d8519dbf93ea62185fc5465f76d79d37384f84f54450af50ba090a712"} Oct 11 11:11:40.244750 
master-2 kubenswrapper[4776]: I1011 11:11:40.244633 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:40.332225 master-2 kubenswrapper[4776]: I1011 11:11:40.332162 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-inventory\") pod \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " Oct 11 11:11:40.332486 master-2 kubenswrapper[4776]: I1011 11:11:40.332242 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-ssh-key\") pod \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " Oct 11 11:11:40.332486 master-2 kubenswrapper[4776]: I1011 11:11:40.332264 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwjj9\" (UniqueName: \"kubernetes.io/projected/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-kube-api-access-wwjj9\") pod \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\" (UID: \"7d0d6c1e-ea6f-481f-976b-9668d70d0b12\") " Oct 11 11:11:40.335211 master-2 kubenswrapper[4776]: I1011 11:11:40.335161 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-kube-api-access-wwjj9" (OuterVolumeSpecName: "kube-api-access-wwjj9") pod "7d0d6c1e-ea6f-481f-976b-9668d70d0b12" (UID: "7d0d6c1e-ea6f-481f-976b-9668d70d0b12"). InnerVolumeSpecName "kube-api-access-wwjj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:11:40.354166 master-2 kubenswrapper[4776]: I1011 11:11:40.354100 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7d0d6c1e-ea6f-481f-976b-9668d70d0b12" (UID: "7d0d6c1e-ea6f-481f-976b-9668d70d0b12"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:11:40.361266 master-2 kubenswrapper[4776]: I1011 11:11:40.361202 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-inventory" (OuterVolumeSpecName: "inventory") pod "7d0d6c1e-ea6f-481f-976b-9668d70d0b12" (UID: "7d0d6c1e-ea6f-481f-976b-9668d70d0b12"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:11:40.435047 master-2 kubenswrapper[4776]: I1011 11:11:40.434984 4776 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-inventory\") on node \"master-2\" DevicePath \"\"" Oct 11 11:11:40.435047 master-2 kubenswrapper[4776]: I1011 11:11:40.435030 4776 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-ssh-key\") on node \"master-2\" DevicePath \"\"" Oct 11 11:11:40.435047 master-2 kubenswrapper[4776]: I1011 11:11:40.435040 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwjj9\" (UniqueName: \"kubernetes.io/projected/7d0d6c1e-ea6f-481f-976b-9668d70d0b12-kube-api-access-wwjj9\") on node \"master-2\" DevicePath \"\"" Oct 11 11:11:40.688162 master-2 kubenswrapper[4776]: I1011 11:11:40.688020 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-dataplane-edpm-qw76s" event={"ID":"7d0d6c1e-ea6f-481f-976b-9668d70d0b12","Type":"ContainerDied","Data":"1e568579c9a19177244c1a879b4e8845b9e8d161fb50ed43e44b349db3481f1b"} Oct 11 11:11:40.688162 master-2 kubenswrapper[4776]: I1011 11:11:40.688067 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-dataplane-edpm-qw76s" Oct 11 11:11:40.688453 master-2 kubenswrapper[4776]: I1011 11:11:40.688068 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e568579c9a19177244c1a879b4e8845b9e8d161fb50ed43e44b349db3481f1b" Oct 11 11:11:40.813192 master-2 kubenswrapper[4776]: I1011 11:11:40.813130 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-dataplane-edpm-hmm68"] Oct 11 11:11:40.813580 master-2 kubenswrapper[4776]: E1011 11:11:40.813526 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d0d6c1e-ea6f-481f-976b-9668d70d0b12" containerName="validate-network-dataplane-edpm" Oct 11 11:11:40.813580 master-2 kubenswrapper[4776]: I1011 11:11:40.813556 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d0d6c1e-ea6f-481f-976b-9668d70d0b12" containerName="validate-network-dataplane-edpm" Oct 11 11:11:40.814029 master-2 kubenswrapper[4776]: I1011 11:11:40.813970 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d0d6c1e-ea6f-481f-976b-9668d70d0b12" containerName="validate-network-dataplane-edpm" Oct 11 11:11:40.814870 master-2 kubenswrapper[4776]: I1011 11:11:40.814846 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:11:40.818241 master-2 kubenswrapper[4776]: I1011 11:11:40.818057 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:11:40.818241 master-2 kubenswrapper[4776]: I1011 11:11:40.818165 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-edpm" Oct 11 11:11:40.818370 master-2 kubenswrapper[4776]: I1011 11:11:40.818354 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 11:11:40.829346 master-2 kubenswrapper[4776]: I1011 11:11:40.829286 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-dataplane-edpm-hmm68"] Oct 11 11:11:40.945156 master-2 kubenswrapper[4776]: I1011 11:11:40.945007 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-ssh-key\") pod \"install-os-dataplane-edpm-hmm68\" (UID: \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:11:40.945156 master-2 kubenswrapper[4776]: I1011 11:11:40.945076 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhwz4\" (UniqueName: \"kubernetes.io/projected/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-kube-api-access-jhwz4\") pod \"install-os-dataplane-edpm-hmm68\" (UID: \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:11:40.945156 master-2 kubenswrapper[4776]: I1011 11:11:40.945149 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-inventory\") pod \"install-os-dataplane-edpm-hmm68\" (UID: \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:11:41.048505 master-2 kubenswrapper[4776]: I1011 11:11:41.048439 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-ssh-key\") pod \"install-os-dataplane-edpm-hmm68\" (UID: \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:11:41.048505 master-2 kubenswrapper[4776]: I1011 11:11:41.048503 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhwz4\" (UniqueName: \"kubernetes.io/projected/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-kube-api-access-jhwz4\") pod \"install-os-dataplane-edpm-hmm68\" (UID: \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:11:41.048821 master-2 kubenswrapper[4776]: I1011 11:11:41.048548 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-inventory\") pod \"install-os-dataplane-edpm-hmm68\" (UID: \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:11:41.061395 master-2 kubenswrapper[4776]: I1011 11:11:41.061330 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-ssh-key\") pod 
\"install-os-dataplane-edpm-hmm68\" (UID: \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:11:41.061764 master-2 kubenswrapper[4776]: I1011 11:11:41.061726 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-inventory\") pod \"install-os-dataplane-edpm-hmm68\" (UID: \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:11:41.078019 master-2 kubenswrapper[4776]: I1011 11:11:41.077964 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhwz4\" (UniqueName: \"kubernetes.io/projected/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-kube-api-access-jhwz4\") pod \"install-os-dataplane-edpm-hmm68\" (UID: \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:11:41.176968 master-2 kubenswrapper[4776]: I1011 11:11:41.176891 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:11:41.724308 master-2 kubenswrapper[4776]: I1011 11:11:41.724244 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-dataplane-edpm-hmm68"] Oct 11 11:11:41.729093 master-2 kubenswrapper[4776]: W1011 11:11:41.729050 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97a17fc9_19bc_4b4d_8fe0_640b5efbc992.slice/crio-4179debd1de70026429192221d8702d87e421e0b85ee1859e86d56b169b9667e WatchSource:0}: Error finding container 4179debd1de70026429192221d8702d87e421e0b85ee1859e86d56b169b9667e: Status 404 returned error can't find the container with id 4179debd1de70026429192221d8702d87e421e0b85ee1859e86d56b169b9667e Oct 11 11:11:42.719577 master-2 kubenswrapper[4776]: I1011 11:11:42.715895 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-dataplane-edpm-hmm68" event={"ID":"97a17fc9-19bc-4b4d-8fe0-640b5efbc992","Type":"ContainerStarted","Data":"c9ad99941256151e596732de62ab477a0600fb63d276d7be934ce39a219c4417"} Oct 11 11:11:42.719577 master-2 kubenswrapper[4776]: I1011 11:11:42.715944 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-dataplane-edpm-hmm68" event={"ID":"97a17fc9-19bc-4b4d-8fe0-640b5efbc992","Type":"ContainerStarted","Data":"4179debd1de70026429192221d8702d87e421e0b85ee1859e86d56b169b9667e"} Oct 11 11:11:42.756437 master-2 kubenswrapper[4776]: I1011 11:11:42.756310 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-dataplane-edpm-hmm68" podStartSLOduration=2.371756026 podStartE2EDuration="2.756291821s" podCreationTimestamp="2025-10-11 11:11:40 +0000 UTC" firstStartedPulling="2025-10-11 11:11:41.731845334 +0000 UTC m=+2736.516272043" lastFinishedPulling="2025-10-11 11:11:42.116381139 +0000 UTC m=+2736.900807838" observedRunningTime="2025-10-11 11:11:42.748728788 +0000 UTC m=+2737.533155497" watchObservedRunningTime="2025-10-11 11:11:42.756291821 +0000 UTC m=+2737.540718530" Oct 11 11:12:57.372863 master-2 kubenswrapper[4776]: I1011 11:12:57.372775 4776 generic.go:334] "Generic (PLEG): container finished" podID="97a17fc9-19bc-4b4d-8fe0-640b5efbc992" containerID="c9ad99941256151e596732de62ab477a0600fb63d276d7be934ce39a219c4417" exitCode=0 Oct 11 11:12:57.372863 master-2 kubenswrapper[4776]: I1011 11:12:57.372848 4776 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/install-os-dataplane-edpm-hmm68" event={"ID":"97a17fc9-19bc-4b4d-8fe0-640b5efbc992","Type":"ContainerDied","Data":"c9ad99941256151e596732de62ab477a0600fb63d276d7be934ce39a219c4417"} Oct 11 11:12:58.879440 master-2 kubenswrapper[4776]: I1011 11:12:58.879394 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:12:58.980139 master-2 kubenswrapper[4776]: I1011 11:12:58.976489 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-ssh-key\") pod \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\" (UID: \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " Oct 11 11:12:58.980139 master-2 kubenswrapper[4776]: I1011 11:12:58.976617 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhwz4\" (UniqueName: \"kubernetes.io/projected/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-kube-api-access-jhwz4\") pod \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\" (UID: \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " Oct 11 11:12:58.980139 master-2 kubenswrapper[4776]: I1011 11:12:58.977781 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-inventory\") pod \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\" (UID: \"97a17fc9-19bc-4b4d-8fe0-640b5efbc992\") " Oct 11 11:12:58.983978 master-2 kubenswrapper[4776]: I1011 11:12:58.983933 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-kube-api-access-jhwz4" (OuterVolumeSpecName: "kube-api-access-jhwz4") pod "97a17fc9-19bc-4b4d-8fe0-640b5efbc992" (UID: "97a17fc9-19bc-4b4d-8fe0-640b5efbc992"). InnerVolumeSpecName "kube-api-access-jhwz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:12:59.002685 master-2 kubenswrapper[4776]: I1011 11:12:59.002613 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-inventory" (OuterVolumeSpecName: "inventory") pod "97a17fc9-19bc-4b4d-8fe0-640b5efbc992" (UID: "97a17fc9-19bc-4b4d-8fe0-640b5efbc992"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:12:59.007272 master-2 kubenswrapper[4776]: I1011 11:12:59.007219 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "97a17fc9-19bc-4b4d-8fe0-640b5efbc992" (UID: "97a17fc9-19bc-4b4d-8fe0-640b5efbc992"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:12:59.081588 master-2 kubenswrapper[4776]: I1011 11:12:59.081453 4776 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-ssh-key\") on node \"master-2\" DevicePath \"\"" Oct 11 11:12:59.081588 master-2 kubenswrapper[4776]: I1011 11:12:59.081505 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhwz4\" (UniqueName: \"kubernetes.io/projected/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-kube-api-access-jhwz4\") on node \"master-2\" DevicePath \"\"" Oct 11 11:12:59.081588 master-2 kubenswrapper[4776]: I1011 11:12:59.081519 4776 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97a17fc9-19bc-4b4d-8fe0-640b5efbc992-inventory\") on node \"master-2\" DevicePath \"\"" Oct 11 11:12:59.394578 master-2 kubenswrapper[4776]: I1011 11:12:59.394421 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-dataplane-edpm-hmm68" event={"ID":"97a17fc9-19bc-4b4d-8fe0-640b5efbc992","Type":"ContainerDied","Data":"4179debd1de70026429192221d8702d87e421e0b85ee1859e86d56b169b9667e"} Oct 11 11:12:59.394578 master-2 kubenswrapper[4776]: I1011 11:12:59.394479 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4179debd1de70026429192221d8702d87e421e0b85ee1859e86d56b169b9667e" Oct 11 11:12:59.394578 master-2 kubenswrapper[4776]: I1011 11:12:59.394499 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-dataplane-edpm-hmm68" Oct 11 11:13:11.231785 master-2 kubenswrapper[4776]: I1011 11:13:11.231683 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-networker-deploy-networkers-7chrf"] Oct 11 11:13:11.232436 master-2 kubenswrapper[4776]: E1011 11:13:11.232064 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a17fc9-19bc-4b4d-8fe0-640b5efbc992" containerName="install-os-dataplane-edpm" Oct 11 11:13:11.232436 master-2 kubenswrapper[4776]: I1011 11:13:11.232081 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a17fc9-19bc-4b4d-8fe0-640b5efbc992" containerName="install-os-dataplane-edpm" Oct 11 11:13:11.232436 master-2 kubenswrapper[4776]: I1011 11:13:11.232285 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a17fc9-19bc-4b4d-8fe0-640b5efbc992" containerName="install-os-dataplane-edpm" Oct 11 11:13:11.232991 master-2 kubenswrapper[4776]: I1011 11:13:11.232971 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:13:11.235848 master-2 kubenswrapper[4776]: I1011 11:13:11.235808 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 11:13:11.236046 master-2 kubenswrapper[4776]: I1011 11:13:11.236018 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:13:11.236186 master-2 kubenswrapper[4776]: I1011 11:13:11.236164 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-networkers" Oct 11 11:13:11.250131 master-2 kubenswrapper[4776]: I1011 11:13:11.250074 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-networker-deploy-networkers-7chrf"] Oct 11 11:13:11.318439 master-2 kubenswrapper[4776]: I1011 11:13:11.318372 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-ssh-key\") pod \"configure-os-networker-deploy-networkers-7chrf\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:13:11.318645 master-2 kubenswrapper[4776]: I1011 11:13:11.318533 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j4js\" (UniqueName: \"kubernetes.io/projected/76c47213-ac0b-4317-8d04-c62782b350ca-kube-api-access-5j4js\") pod \"configure-os-networker-deploy-networkers-7chrf\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:13:11.318735 master-2 kubenswrapper[4776]: I1011 11:13:11.318669 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-inventory\") pod \"configure-os-networker-deploy-networkers-7chrf\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:13:11.420993 master-2 kubenswrapper[4776]: I1011 11:13:11.420919 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-inventory\") pod \"configure-os-networker-deploy-networkers-7chrf\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:13:11.420993 master-2 kubenswrapper[4776]: I1011 11:13:11.420996 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-ssh-key\") pod \"configure-os-networker-deploy-networkers-7chrf\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:13:11.421245 master-2 kubenswrapper[4776]: I1011 11:13:11.421071 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j4js\" (UniqueName: \"kubernetes.io/projected/76c47213-ac0b-4317-8d04-c62782b350ca-kube-api-access-5j4js\") pod \"configure-os-networker-deploy-networkers-7chrf\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:13:11.424081 master-2 kubenswrapper[4776]: I1011 
11:13:11.424045 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-inventory\") pod \"configure-os-networker-deploy-networkers-7chrf\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:13:11.425408 master-2 kubenswrapper[4776]: I1011 11:13:11.425374 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-ssh-key\") pod \"configure-os-networker-deploy-networkers-7chrf\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:13:11.443897 master-2 kubenswrapper[4776]: I1011 11:13:11.443841 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j4js\" (UniqueName: \"kubernetes.io/projected/76c47213-ac0b-4317-8d04-c62782b350ca-kube-api-access-5j4js\") pod \"configure-os-networker-deploy-networkers-7chrf\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:13:11.549419 master-2 kubenswrapper[4776]: I1011 11:13:11.549285 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:13:12.074151 master-2 kubenswrapper[4776]: W1011 11:13:12.074101 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76c47213_ac0b_4317_8d04_c62782b350ca.slice/crio-d988d3aa7e418d77815e76f53fabe06a76d9deef23117ecee521dd14a195dad3 WatchSource:0}: Error finding container d988d3aa7e418d77815e76f53fabe06a76d9deef23117ecee521dd14a195dad3: Status 404 returned error can't find the container with id d988d3aa7e418d77815e76f53fabe06a76d9deef23117ecee521dd14a195dad3 Oct 11 11:13:12.074869 master-2 kubenswrapper[4776]: I1011 11:13:12.074832 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-networker-deploy-networkers-7chrf"] Oct 11 11:13:12.568072 master-2 kubenswrapper[4776]: I1011 11:13:12.567999 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-networker-deploy-networkers-7chrf" event={"ID":"76c47213-ac0b-4317-8d04-c62782b350ca","Type":"ContainerStarted","Data":"d988d3aa7e418d77815e76f53fabe06a76d9deef23117ecee521dd14a195dad3"} Oct 11 11:13:13.575578 master-2 kubenswrapper[4776]: I1011 11:13:13.575504 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-networker-deploy-networkers-7chrf" event={"ID":"76c47213-ac0b-4317-8d04-c62782b350ca","Type":"ContainerStarted","Data":"10f8d52bf4bd1629684d13c561de45c535476feb86dfb1a788c3f2556f22ab4c"} Oct 11 11:13:13.601058 master-2 kubenswrapper[4776]: I1011 11:13:13.600974 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-networker-deploy-networkers-7chrf" podStartSLOduration=2.212631386 podStartE2EDuration="2.600954073s" podCreationTimestamp="2025-10-11 11:13:11 +0000 UTC" firstStartedPulling="2025-10-11 11:13:12.076376419 +0000 UTC m=+2826.860803128" lastFinishedPulling="2025-10-11 11:13:12.464699106 +0000 UTC m=+2827.249125815" observedRunningTime="2025-10-11 11:13:13.600295606 +0000 UTC m=+2828.384722315" watchObservedRunningTime="2025-10-11 11:13:13.600954073 +0000 UTC m=+2828.385380782" Oct 11 11:14:07.038220 master-2 
kubenswrapper[4776]: I1011 11:14:07.038139 4776 generic.go:334] "Generic (PLEG): container finished" podID="76c47213-ac0b-4317-8d04-c62782b350ca" containerID="10f8d52bf4bd1629684d13c561de45c535476feb86dfb1a788c3f2556f22ab4c" exitCode=2 Oct 11 11:14:07.038220 master-2 kubenswrapper[4776]: I1011 11:14:07.038194 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-networker-deploy-networkers-7chrf" event={"ID":"76c47213-ac0b-4317-8d04-c62782b350ca","Type":"ContainerDied","Data":"10f8d52bf4bd1629684d13c561de45c535476feb86dfb1a788c3f2556f22ab4c"} Oct 11 11:14:08.571408 master-2 kubenswrapper[4776]: I1011 11:14:08.571303 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:14:08.721486 master-2 kubenswrapper[4776]: I1011 11:14:08.721418 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-inventory\") pod \"76c47213-ac0b-4317-8d04-c62782b350ca\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " Oct 11 11:14:08.721764 master-2 kubenswrapper[4776]: I1011 11:14:08.721734 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-ssh-key\") pod \"76c47213-ac0b-4317-8d04-c62782b350ca\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " Oct 11 11:14:08.721888 master-2 kubenswrapper[4776]: I1011 11:14:08.721843 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j4js\" (UniqueName: \"kubernetes.io/projected/76c47213-ac0b-4317-8d04-c62782b350ca-kube-api-access-5j4js\") pod \"76c47213-ac0b-4317-8d04-c62782b350ca\" (UID: \"76c47213-ac0b-4317-8d04-c62782b350ca\") " Oct 11 11:14:08.724919 master-2 kubenswrapper[4776]: I1011 11:14:08.724863 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76c47213-ac0b-4317-8d04-c62782b350ca-kube-api-access-5j4js" (OuterVolumeSpecName: "kube-api-access-5j4js") pod "76c47213-ac0b-4317-8d04-c62782b350ca" (UID: "76c47213-ac0b-4317-8d04-c62782b350ca"). InnerVolumeSpecName "kube-api-access-5j4js". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:14:08.745774 master-2 kubenswrapper[4776]: I1011 11:14:08.742861 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "76c47213-ac0b-4317-8d04-c62782b350ca" (UID: "76c47213-ac0b-4317-8d04-c62782b350ca"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:14:08.752505 master-2 kubenswrapper[4776]: I1011 11:14:08.749645 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-inventory" (OuterVolumeSpecName: "inventory") pod "76c47213-ac0b-4317-8d04-c62782b350ca" (UID: "76c47213-ac0b-4317-8d04-c62782b350ca"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:14:08.823951 master-2 kubenswrapper[4776]: I1011 11:14:08.823895 4776 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-inventory\") on node \"master-2\" DevicePath \"\"" Oct 11 11:14:08.823951 master-2 kubenswrapper[4776]: I1011 11:14:08.823934 4776 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76c47213-ac0b-4317-8d04-c62782b350ca-ssh-key\") on node \"master-2\" DevicePath \"\"" Oct 11 11:14:08.823951 master-2 kubenswrapper[4776]: I1011 11:14:08.823943 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j4js\" (UniqueName: \"kubernetes.io/projected/76c47213-ac0b-4317-8d04-c62782b350ca-kube-api-access-5j4js\") on node \"master-2\" DevicePath \"\"" Oct 11 11:14:09.064508 master-2 kubenswrapper[4776]: I1011 11:14:09.064387 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-networker-deploy-networkers-7chrf" event={"ID":"76c47213-ac0b-4317-8d04-c62782b350ca","Type":"ContainerDied","Data":"d988d3aa7e418d77815e76f53fabe06a76d9deef23117ecee521dd14a195dad3"} Oct 11 11:14:09.064508 master-2 kubenswrapper[4776]: I1011 11:14:09.064429 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d988d3aa7e418d77815e76f53fabe06a76d9deef23117ecee521dd14a195dad3" Oct 11 11:14:09.064871 master-2 kubenswrapper[4776]: I1011 11:14:09.064663 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-networker-deploy-networkers-7chrf" Oct 11 11:14:16.036363 master-2 kubenswrapper[4776]: I1011 11:14:16.036283 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-networker-deploy-networkers-8p46m"] Oct 11 11:14:16.037192 master-2 kubenswrapper[4776]: E1011 11:14:16.036779 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76c47213-ac0b-4317-8d04-c62782b350ca" containerName="configure-os-networker-deploy-networkers" Oct 11 11:14:16.037192 master-2 kubenswrapper[4776]: I1011 11:14:16.036797 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="76c47213-ac0b-4317-8d04-c62782b350ca" containerName="configure-os-networker-deploy-networkers" Oct 11 11:14:16.037192 master-2 kubenswrapper[4776]: I1011 11:14:16.037030 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="76c47213-ac0b-4317-8d04-c62782b350ca" containerName="configure-os-networker-deploy-networkers" Oct 11 11:14:16.037872 master-2 kubenswrapper[4776]: I1011 11:14:16.037838 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:14:16.043099 master-2 kubenswrapper[4776]: I1011 11:14:16.043036 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:14:16.043176 master-2 kubenswrapper[4776]: I1011 11:14:16.043145 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 11:14:16.045218 master-2 kubenswrapper[4776]: I1011 11:14:16.045188 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-networkers" Oct 11 11:14:16.076276 master-2 kubenswrapper[4776]: I1011 11:14:16.076218 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-networker-deploy-networkers-8p46m"] Oct 11 11:14:16.162383 master-2 kubenswrapper[4776]: I1011 11:14:16.162293 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-inventory\") pod \"configure-os-networker-deploy-networkers-8p46m\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") " pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:14:16.162628 master-2 kubenswrapper[4776]: I1011 11:14:16.162394 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-ssh-key\") pod \"configure-os-networker-deploy-networkers-8p46m\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") " pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:14:16.162628 master-2 kubenswrapper[4776]: I1011 11:14:16.162445 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmdcj\" (UniqueName: \"kubernetes.io/projected/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-kube-api-access-hmdcj\") pod \"configure-os-networker-deploy-networkers-8p46m\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") " pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:14:16.265012 master-2 kubenswrapper[4776]: I1011 11:14:16.264948 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-inventory\") pod \"configure-os-networker-deploy-networkers-8p46m\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") " pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:14:16.265228 master-2 kubenswrapper[4776]: I1011 11:14:16.265022 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-ssh-key\") pod \"configure-os-networker-deploy-networkers-8p46m\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") " pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:14:16.265228 master-2 kubenswrapper[4776]: I1011 11:14:16.265060 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmdcj\" (UniqueName: \"kubernetes.io/projected/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-kube-api-access-hmdcj\") pod \"configure-os-networker-deploy-networkers-8p46m\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") " pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:14:16.268461 master-2 kubenswrapper[4776]: I1011 
11:14:16.268424 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-ssh-key\") pod \"configure-os-networker-deploy-networkers-8p46m\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") " pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:14:16.270027 master-2 kubenswrapper[4776]: I1011 11:14:16.269982 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-inventory\") pod \"configure-os-networker-deploy-networkers-8p46m\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") " pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:14:16.297580 master-2 kubenswrapper[4776]: I1011 11:14:16.297461 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmdcj\" (UniqueName: \"kubernetes.io/projected/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-kube-api-access-hmdcj\") pod \"configure-os-networker-deploy-networkers-8p46m\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") " pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:14:16.360660 master-2 kubenswrapper[4776]: I1011 11:14:16.360551 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:14:16.972873 master-2 kubenswrapper[4776]: I1011 11:14:16.972804 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-networker-deploy-networkers-8p46m"] Oct 11 11:14:17.127393 master-2 kubenswrapper[4776]: I1011 11:14:17.127224 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-networker-deploy-networkers-8p46m" event={"ID":"0f11458d-e09c-4f36-bfbb-b2b0fe04642e","Type":"ContainerStarted","Data":"76511c6a1df793bb03ad64104874f546aa85facc461739b86f47db1c14ea4393"} Oct 11 11:14:18.136240 master-2 kubenswrapper[4776]: I1011 11:14:18.136163 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-networker-deploy-networkers-8p46m" event={"ID":"0f11458d-e09c-4f36-bfbb-b2b0fe04642e","Type":"ContainerStarted","Data":"08de6f93d73a127c8f66d64af6bac3e09c14a749a69eab80aa886fdae1e5ae3f"} Oct 11 11:14:18.165298 master-2 kubenswrapper[4776]: I1011 11:14:18.165206 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-networker-deploy-networkers-8p46m" podStartSLOduration=1.743691896 podStartE2EDuration="2.16518972s" podCreationTimestamp="2025-10-11 11:14:16 +0000 UTC" firstStartedPulling="2025-10-11 11:14:16.972628032 +0000 UTC m=+2891.757054741" lastFinishedPulling="2025-10-11 11:14:17.394125846 +0000 UTC m=+2892.178552565" observedRunningTime="2025-10-11 11:14:18.162387484 +0000 UTC m=+2892.946814193" watchObservedRunningTime="2025-10-11 11:14:18.16518972 +0000 UTC m=+2892.949616429" Oct 11 11:15:08.580080 master-2 kubenswrapper[4776]: I1011 11:15:08.580008 4776 generic.go:334] "Generic (PLEG): container finished" podID="0f11458d-e09c-4f36-bfbb-b2b0fe04642e" containerID="08de6f93d73a127c8f66d64af6bac3e09c14a749a69eab80aa886fdae1e5ae3f" exitCode=0 Oct 11 11:15:08.580663 master-2 kubenswrapper[4776]: I1011 11:15:08.580087 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-networker-deploy-networkers-8p46m" 
event={"ID":"0f11458d-e09c-4f36-bfbb-b2b0fe04642e","Type":"ContainerDied","Data":"08de6f93d73a127c8f66d64af6bac3e09c14a749a69eab80aa886fdae1e5ae3f"} Oct 11 11:15:10.574873 master-2 kubenswrapper[4776]: I1011 11:15:10.574835 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:15:10.599789 master-2 kubenswrapper[4776]: I1011 11:15:10.599741 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-networker-deploy-networkers-8p46m" event={"ID":"0f11458d-e09c-4f36-bfbb-b2b0fe04642e","Type":"ContainerDied","Data":"76511c6a1df793bb03ad64104874f546aa85facc461739b86f47db1c14ea4393"} Oct 11 11:15:10.599789 master-2 kubenswrapper[4776]: I1011 11:15:10.599794 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76511c6a1df793bb03ad64104874f546aa85facc461739b86f47db1c14ea4393" Oct 11 11:15:10.600038 master-2 kubenswrapper[4776]: I1011 11:15:10.599801 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-networker-deploy-networkers-8p46m" Oct 11 11:15:10.664711 master-2 kubenswrapper[4776]: I1011 11:15:10.664371 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-ssh-key\") pod \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") " Oct 11 11:15:10.664711 master-2 kubenswrapper[4776]: I1011 11:15:10.664464 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmdcj\" (UniqueName: \"kubernetes.io/projected/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-kube-api-access-hmdcj\") pod \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") " Oct 11 11:15:10.664711 master-2 kubenswrapper[4776]: I1011 11:15:10.664491 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-inventory\") pod \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\" (UID: \"0f11458d-e09c-4f36-bfbb-b2b0fe04642e\") " Oct 11 11:15:10.668253 master-2 kubenswrapper[4776]: I1011 11:15:10.668198 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-kube-api-access-hmdcj" (OuterVolumeSpecName: "kube-api-access-hmdcj") pod "0f11458d-e09c-4f36-bfbb-b2b0fe04642e" (UID: "0f11458d-e09c-4f36-bfbb-b2b0fe04642e"). InnerVolumeSpecName "kube-api-access-hmdcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:15:10.684632 master-2 kubenswrapper[4776]: I1011 11:15:10.684592 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0f11458d-e09c-4f36-bfbb-b2b0fe04642e" (UID: "0f11458d-e09c-4f36-bfbb-b2b0fe04642e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:15:10.687168 master-2 kubenswrapper[4776]: I1011 11:15:10.687125 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-inventory" (OuterVolumeSpecName: "inventory") pod "0f11458d-e09c-4f36-bfbb-b2b0fe04642e" (UID: "0f11458d-e09c-4f36-bfbb-b2b0fe04642e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:15:10.767044 master-2 kubenswrapper[4776]: I1011 11:15:10.766898 4776 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-ssh-key\") on node \"master-2\" DevicePath \"\"" Oct 11 11:15:10.767044 master-2 kubenswrapper[4776]: I1011 11:15:10.766965 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmdcj\" (UniqueName: \"kubernetes.io/projected/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-kube-api-access-hmdcj\") on node \"master-2\" DevicePath \"\"" Oct 11 11:15:10.767044 master-2 kubenswrapper[4776]: I1011 11:15:10.766993 4776 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f11458d-e09c-4f36-bfbb-b2b0fe04642e-inventory\") on node \"master-2\" DevicePath \"\"" Oct 11 11:15:10.861214 master-2 kubenswrapper[4776]: I1011 11:15:10.861155 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-networker-deploy-br44d"] Oct 11 11:15:10.861515 master-2 kubenswrapper[4776]: E1011 11:15:10.861496 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f11458d-e09c-4f36-bfbb-b2b0fe04642e" containerName="configure-os-networker-deploy-networkers" Oct 11 11:15:10.861515 master-2 kubenswrapper[4776]: I1011 11:15:10.861512 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f11458d-e09c-4f36-bfbb-b2b0fe04642e" containerName="configure-os-networker-deploy-networkers" Oct 11 11:15:10.861768 master-2 kubenswrapper[4776]: I1011 11:15:10.861748 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f11458d-e09c-4f36-bfbb-b2b0fe04642e" containerName="configure-os-networker-deploy-networkers" Oct 11 11:15:10.862526 master-2 kubenswrapper[4776]: I1011 11:15:10.862504 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-networker-deploy-br44d" Oct 11 11:15:10.926840 master-2 kubenswrapper[4776]: I1011 11:15:10.926787 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-networker-deploy-br44d"] Oct 11 11:15:10.970171 master-2 kubenswrapper[4776]: I1011 11:15:10.970124 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-networkers\" (UniqueName: \"kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-ssh-key-networkers\") pod \"ssh-known-hosts-networker-deploy-br44d\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") " pod="openstack/ssh-known-hosts-networker-deploy-br44d" Oct 11 11:15:10.970510 master-2 kubenswrapper[4776]: I1011 11:15:10.970494 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k647\" (UniqueName: \"kubernetes.io/projected/8d499e5e-9473-4609-a496-3d6005471c60-kube-api-access-4k647\") pod \"ssh-known-hosts-networker-deploy-br44d\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") " pod="openstack/ssh-known-hosts-networker-deploy-br44d" Oct 11 11:15:10.970597 master-2 kubenswrapper[4776]: I1011 11:15:10.970585 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-inventory-0\") pod \"ssh-known-hosts-networker-deploy-br44d\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") " pod="openstack/ssh-known-hosts-networker-deploy-br44d" Oct 11 11:15:11.072357 master-2 kubenswrapper[4776]: I1011 11:15:11.072216 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k647\" (UniqueName: \"kubernetes.io/projected/8d499e5e-9473-4609-a496-3d6005471c60-kube-api-access-4k647\") pod \"ssh-known-hosts-networker-deploy-br44d\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") " pod="openstack/ssh-known-hosts-networker-deploy-br44d" Oct 11 11:15:11.072357 master-2 kubenswrapper[4776]: I1011 11:15:11.072280 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-inventory-0\") pod \"ssh-known-hosts-networker-deploy-br44d\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") " pod="openstack/ssh-known-hosts-networker-deploy-br44d" Oct 11 11:15:11.072738 master-2 kubenswrapper[4776]: I1011 11:15:11.072395 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-networkers\" (UniqueName: \"kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-ssh-key-networkers\") pod \"ssh-known-hosts-networker-deploy-br44d\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") " pod="openstack/ssh-known-hosts-networker-deploy-br44d" Oct 11 11:15:11.075886 master-2 kubenswrapper[4776]: I1011 11:15:11.075838 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-networkers\" (UniqueName: \"kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-ssh-key-networkers\") pod \"ssh-known-hosts-networker-deploy-br44d\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") " pod="openstack/ssh-known-hosts-networker-deploy-br44d" Oct 11 11:15:11.078059 master-2 kubenswrapper[4776]: I1011 11:15:11.078025 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-inventory-0\") pod 
\"ssh-known-hosts-networker-deploy-br44d\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") " pod="openstack/ssh-known-hosts-networker-deploy-br44d" Oct 11 11:15:11.093337 master-2 kubenswrapper[4776]: I1011 11:15:11.093290 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k647\" (UniqueName: \"kubernetes.io/projected/8d499e5e-9473-4609-a496-3d6005471c60-kube-api-access-4k647\") pod \"ssh-known-hosts-networker-deploy-br44d\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") " pod="openstack/ssh-known-hosts-networker-deploy-br44d" Oct 11 11:15:11.175826 master-2 kubenswrapper[4776]: I1011 11:15:11.175766 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-networker-deploy-br44d" Oct 11 11:15:12.171404 master-2 kubenswrapper[4776]: W1011 11:15:12.171350 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d499e5e_9473_4609_a496_3d6005471c60.slice/crio-741b2c080c17d74c7755050ba6692dca3e95816e1c05b4b16fc9d28b439d9bbf WatchSource:0}: Error finding container 741b2c080c17d74c7755050ba6692dca3e95816e1c05b4b16fc9d28b439d9bbf: Status 404 returned error can't find the container with id 741b2c080c17d74c7755050ba6692dca3e95816e1c05b4b16fc9d28b439d9bbf Oct 11 11:15:12.172313 master-2 kubenswrapper[4776]: I1011 11:15:12.172259 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-networker-deploy-br44d"] Oct 11 11:15:12.614682 master-2 kubenswrapper[4776]: I1011 11:15:12.614528 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-networker-deploy-br44d" event={"ID":"8d499e5e-9473-4609-a496-3d6005471c60","Type":"ContainerStarted","Data":"741b2c080c17d74c7755050ba6692dca3e95816e1c05b4b16fc9d28b439d9bbf"} Oct 11 11:15:13.622704 master-2 kubenswrapper[4776]: I1011 11:15:13.621931 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-networker-deploy-br44d" event={"ID":"8d499e5e-9473-4609-a496-3d6005471c60","Type":"ContainerStarted","Data":"76b99fe0e29fff620b2a93c0143495b11deea46a199bc5df567001f7dd6881fa"} Oct 11 11:15:13.658156 master-2 kubenswrapper[4776]: I1011 11:15:13.658054 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-networker-deploy-br44d" podStartSLOduration=3.187304554 podStartE2EDuration="3.658033632s" podCreationTimestamp="2025-10-11 11:15:10 +0000 UTC" firstStartedPulling="2025-10-11 11:15:12.173389876 +0000 UTC m=+2946.957816585" lastFinishedPulling="2025-10-11 11:15:12.644118954 +0000 UTC m=+2947.428545663" observedRunningTime="2025-10-11 11:15:13.646329957 +0000 UTC m=+2948.430756676" watchObservedRunningTime="2025-10-11 11:15:13.658033632 +0000 UTC m=+2948.442460341" Oct 11 11:15:20.682881 master-2 kubenswrapper[4776]: I1011 11:15:20.682800 4776 generic.go:334] "Generic (PLEG): container finished" podID="8d499e5e-9473-4609-a496-3d6005471c60" containerID="76b99fe0e29fff620b2a93c0143495b11deea46a199bc5df567001f7dd6881fa" exitCode=0 Oct 11 11:15:20.682881 master-2 kubenswrapper[4776]: I1011 11:15:20.682875 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-networker-deploy-br44d" event={"ID":"8d499e5e-9473-4609-a496-3d6005471c60","Type":"ContainerDied","Data":"76b99fe0e29fff620b2a93c0143495b11deea46a199bc5df567001f7dd6881fa"} Oct 11 11:15:22.212122 master-2 kubenswrapper[4776]: I1011 11:15:22.212071 4776 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-networker-deploy-br44d" Oct 11 11:15:22.315771 master-2 kubenswrapper[4776]: I1011 11:15:22.315690 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k647\" (UniqueName: \"kubernetes.io/projected/8d499e5e-9473-4609-a496-3d6005471c60-kube-api-access-4k647\") pod \"8d499e5e-9473-4609-a496-3d6005471c60\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") " Oct 11 11:15:22.316037 master-2 kubenswrapper[4776]: I1011 11:15:22.315888 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-inventory-0\") pod \"8d499e5e-9473-4609-a496-3d6005471c60\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") " Oct 11 11:15:22.316037 master-2 kubenswrapper[4776]: I1011 11:15:22.315968 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-networkers\" (UniqueName: \"kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-ssh-key-networkers\") pod \"8d499e5e-9473-4609-a496-3d6005471c60\" (UID: \"8d499e5e-9473-4609-a496-3d6005471c60\") " Oct 11 11:15:22.318727 master-2 kubenswrapper[4776]: I1011 11:15:22.318645 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d499e5e-9473-4609-a496-3d6005471c60-kube-api-access-4k647" (OuterVolumeSpecName: "kube-api-access-4k647") pod "8d499e5e-9473-4609-a496-3d6005471c60" (UID: "8d499e5e-9473-4609-a496-3d6005471c60"). InnerVolumeSpecName "kube-api-access-4k647". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:15:22.338383 master-2 kubenswrapper[4776]: I1011 11:15:22.338319 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "8d499e5e-9473-4609-a496-3d6005471c60" (UID: "8d499e5e-9473-4609-a496-3d6005471c60"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:15:22.344097 master-2 kubenswrapper[4776]: I1011 11:15:22.344061 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-ssh-key-networkers" (OuterVolumeSpecName: "ssh-key-networkers") pod "8d499e5e-9473-4609-a496-3d6005471c60" (UID: "8d499e5e-9473-4609-a496-3d6005471c60"). InnerVolumeSpecName "ssh-key-networkers". 
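The "Observed pod startup duration" entry above for openstack/ssh-known-hosts-networker-deploy-br44d is easier to read once the fields are related: the logged values are consistent with podStartE2EDuration being watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration being that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check using the timestamps from that entry, truncated to microseconds because strptime cannot parse the nanosecond part:

from datetime import datetime

def ts(s):
    # the journal logs nanoseconds; strptime handles at most microseconds
    return datetime.strptime(s[:26], "%Y-%m-%d %H:%M:%S.%f")

created   = datetime(2025, 10, 11, 11, 15, 10)       # podCreationTimestamp (whole seconds in the log)
running   = ts("2025-10-11 11:15:13.658033632")      # watchObservedRunningTime
pull_from = ts("2025-10-11 11:15:12.173389876")      # firstStartedPulling
pull_to   = ts("2025-10-11 11:15:12.644118954")      # lastFinishedPulling

e2e = (running - created).total_seconds()
slo = e2e - (pull_to - pull_from).total_seconds()
print(f"e2e ~ {e2e:.6f}s  slo ~ {slo:.6f}s")         # logged: 3.658033632s and 3.187304554s

The same relation holds for the install-certs-dataplane-edpm-wqd9x and ovn-dataplane-edpm-rgmsv startup-duration entries further down.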
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:15:22.418045 master-2 kubenswrapper[4776]: I1011 11:15:22.417975 4776 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-inventory-0\") on node \"master-2\" DevicePath \"\"" Oct 11 11:15:22.418045 master-2 kubenswrapper[4776]: I1011 11:15:22.418028 4776 reconciler_common.go:293] "Volume detached for volume \"ssh-key-networkers\" (UniqueName: \"kubernetes.io/secret/8d499e5e-9473-4609-a496-3d6005471c60-ssh-key-networkers\") on node \"master-2\" DevicePath \"\"" Oct 11 11:15:22.418045 master-2 kubenswrapper[4776]: I1011 11:15:22.418044 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k647\" (UniqueName: \"kubernetes.io/projected/8d499e5e-9473-4609-a496-3d6005471c60-kube-api-access-4k647\") on node \"master-2\" DevicePath \"\"" Oct 11 11:15:22.706214 master-2 kubenswrapper[4776]: I1011 11:15:22.706102 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-networker-deploy-br44d" event={"ID":"8d499e5e-9473-4609-a496-3d6005471c60","Type":"ContainerDied","Data":"741b2c080c17d74c7755050ba6692dca3e95816e1c05b4b16fc9d28b439d9bbf"} Oct 11 11:15:22.706214 master-2 kubenswrapper[4776]: I1011 11:15:22.706143 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="741b2c080c17d74c7755050ba6692dca3e95816e1c05b4b16fc9d28b439d9bbf" Oct 11 11:15:22.706214 master-2 kubenswrapper[4776]: I1011 11:15:22.706154 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-networker-deploy-br44d" Oct 11 11:16:40.740561 master-2 kubenswrapper[4776]: I1011 11:16:40.740436 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-dataplane-edpm-wqd9x"] Oct 11 11:16:40.741268 master-2 kubenswrapper[4776]: E1011 11:16:40.740908 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d499e5e-9473-4609-a496-3d6005471c60" containerName="ssh-known-hosts-networker-deploy" Oct 11 11:16:40.741268 master-2 kubenswrapper[4776]: I1011 11:16:40.740932 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d499e5e-9473-4609-a496-3d6005471c60" containerName="ssh-known-hosts-networker-deploy" Oct 11 11:16:40.741268 master-2 kubenswrapper[4776]: I1011 11:16:40.741168 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d499e5e-9473-4609-a496-3d6005471c60" containerName="ssh-known-hosts-networker-deploy" Oct 11 11:16:40.741997 master-2 kubenswrapper[4776]: I1011 11:16:40.741965 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.745730 master-2 kubenswrapper[4776]: I1011 11:16:40.745661 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"edpm-neutron-metadata-default-certs-0" Oct 11 11:16:40.746044 master-2 kubenswrapper[4776]: I1011 11:16:40.746013 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:16:40.746377 master-2 kubenswrapper[4776]: I1011 11:16:40.746326 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"edpm-ovn-default-certs-0" Oct 11 11:16:40.746437 master-2 kubenswrapper[4776]: I1011 11:16:40.746402 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"edpm-telemetry-default-certs-0" Oct 11 11:16:40.746497 master-2 kubenswrapper[4776]: I1011 11:16:40.746347 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"edpm-libvirt-default-certs-0" Oct 11 11:16:40.746611 master-2 kubenswrapper[4776]: I1011 11:16:40.746585 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-edpm" Oct 11 11:16:40.746685 master-2 kubenswrapper[4776]: I1011 11:16:40.746658 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 11:16:40.781318 master-2 kubenswrapper[4776]: I1011 11:16:40.761748 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-dataplane-edpm-wqd9x"] Oct 11 11:16:40.825261 master-2 kubenswrapper[4776]: I1011 11:16:40.825204 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-bootstrap-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.825497 master-2 kubenswrapper[4776]: I1011 11:16:40.825278 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-libvirt-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.825497 master-2 kubenswrapper[4776]: I1011 11:16:40.825304 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-telemetry-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.825497 master-2 kubenswrapper[4776]: I1011 11:16:40.825382 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.825497 master-2 kubenswrapper[4776]: I1011 11:16:40.825411 4776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-libvirt-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.825497 master-2 kubenswrapper[4776]: I1011 11:16:40.825435 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-neutron-metadata-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.825497 master-2 kubenswrapper[4776]: I1011 11:16:40.825464 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-inventory\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.825701 master-2 kubenswrapper[4776]: I1011 11:16:40.825595 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ovn-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.825748 master-2 kubenswrapper[4776]: I1011 11:16:40.825723 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-nova-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.825782 master-2 kubenswrapper[4776]: I1011 11:16:40.825764 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv76g\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-kube-api-access-vv76g\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.825818 master-2 kubenswrapper[4776]: I1011 11:16:40.825801 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ssh-key\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.825933 master-2 kubenswrapper[4776]: I1011 11:16:40.825898 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-telemetry-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.825982 master-2 kubenswrapper[4776]: I1011 11:16:40.825954 4776 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"edpm-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-ovn-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.928140 master-2 kubenswrapper[4776]: I1011 11:16:40.928071 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-bootstrap-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.928401 master-2 kubenswrapper[4776]: I1011 11:16:40.928184 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-libvirt-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.928401 master-2 kubenswrapper[4776]: I1011 11:16:40.928219 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-telemetry-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.928401 master-2 kubenswrapper[4776]: I1011 11:16:40.928293 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.928401 master-2 kubenswrapper[4776]: I1011 11:16:40.928329 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-libvirt-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.928401 master-2 kubenswrapper[4776]: I1011 11:16:40.928364 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-neutron-metadata-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.928620 master-2 kubenswrapper[4776]: I1011 11:16:40.928409 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-inventory\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.928620 master-2 kubenswrapper[4776]: I1011 11:16:40.928462 4776 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ovn-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.928620 master-2 kubenswrapper[4776]: I1011 11:16:40.928521 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-nova-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.928620 master-2 kubenswrapper[4776]: I1011 11:16:40.928550 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv76g\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-kube-api-access-vv76g\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.928620 master-2 kubenswrapper[4776]: I1011 11:16:40.928585 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ssh-key\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.928905 master-2 kubenswrapper[4776]: I1011 11:16:40.928728 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-telemetry-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.928905 master-2 kubenswrapper[4776]: I1011 11:16:40.928770 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"edpm-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-ovn-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.933308 master-2 kubenswrapper[4776]: I1011 11:16:40.931694 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-inventory\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.933308 master-2 kubenswrapper[4776]: I1011 11:16:40.931955 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-telemetry-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.933308 master-2 kubenswrapper[4776]: I1011 11:16:40.932874 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-libvirt-combined-ca-bundle\") 
pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.934019 master-2 kubenswrapper[4776]: I1011 11:16:40.933667 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-bootstrap-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.934721 master-2 kubenswrapper[4776]: I1011 11:16:40.934610 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ovn-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.934917 master-2 kubenswrapper[4776]: I1011 11:16:40.934771 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.936284 master-2 kubenswrapper[4776]: I1011 11:16:40.936252 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-libvirt-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.936541 master-2 kubenswrapper[4776]: I1011 11:16:40.936511 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ssh-key\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.936592 master-2 kubenswrapper[4776]: I1011 11:16:40.936563 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-telemetry-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.937151 master-2 kubenswrapper[4776]: I1011 11:16:40.937109 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-nova-combined-ca-bundle\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.937421 master-2 kubenswrapper[4776]: I1011 11:16:40.937388 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-neutron-metadata-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " 
pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.938969 master-2 kubenswrapper[4776]: I1011 11:16:40.938931 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"edpm-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-ovn-default-certs-0\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:40.949937 master-2 kubenswrapper[4776]: I1011 11:16:40.949900 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv76g\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-kube-api-access-vv76g\") pod \"install-certs-dataplane-edpm-wqd9x\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:41.101787 master-2 kubenswrapper[4776]: I1011 11:16:41.101582 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:16:41.660745 master-2 kubenswrapper[4776]: I1011 11:16:41.660631 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-dataplane-edpm-wqd9x"] Oct 11 11:16:41.663983 master-2 kubenswrapper[4776]: W1011 11:16:41.663882 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80228f17_5924_456b_8353_45c055831ed5.slice/crio-47dcdac25ed44c34bcf447eede90133e4d3a388caf728ccbffa21a56003e9abe WatchSource:0}: Error finding container 47dcdac25ed44c34bcf447eede90133e4d3a388caf728ccbffa21a56003e9abe: Status 404 returned error can't find the container with id 47dcdac25ed44c34bcf447eede90133e4d3a388caf728ccbffa21a56003e9abe Oct 11 11:16:41.665952 master-2 kubenswrapper[4776]: I1011 11:16:41.665915 4776 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 11:16:42.495581 master-2 kubenswrapper[4776]: I1011 11:16:42.495487 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-dataplane-edpm-wqd9x" event={"ID":"80228f17-5924-456b-8353-45c055831ed5","Type":"ContainerStarted","Data":"ed4a5ac26e0f995190c896a5b973a39f209d36b8cc703d2afed5b4eb99b93995"} Oct 11 11:16:42.495581 master-2 kubenswrapper[4776]: I1011 11:16:42.495565 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-dataplane-edpm-wqd9x" event={"ID":"80228f17-5924-456b-8353-45c055831ed5","Type":"ContainerStarted","Data":"47dcdac25ed44c34bcf447eede90133e4d3a388caf728ccbffa21a56003e9abe"} Oct 11 11:16:42.528901 master-2 kubenswrapper[4776]: I1011 11:16:42.527873 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-dataplane-edpm-wqd9x" podStartSLOduration=2.064192651 podStartE2EDuration="2.527851358s" podCreationTimestamp="2025-10-11 11:16:40 +0000 UTC" firstStartedPulling="2025-10-11 11:16:41.665815083 +0000 UTC m=+3036.450241792" lastFinishedPulling="2025-10-11 11:16:42.12947379 +0000 UTC m=+3036.913900499" observedRunningTime="2025-10-11 11:16:42.524346893 +0000 UTC m=+3037.308773602" watchObservedRunningTime="2025-10-11 11:16:42.527851358 +0000 UTC m=+3037.312278077" Oct 11 11:17:15.762015 master-2 kubenswrapper[4776]: I1011 11:17:15.761952 4776 generic.go:334] "Generic (PLEG): container finished" podID="80228f17-5924-456b-8353-45c055831ed5" 
containerID="ed4a5ac26e0f995190c896a5b973a39f209d36b8cc703d2afed5b4eb99b93995" exitCode=0 Oct 11 11:17:15.762015 master-2 kubenswrapper[4776]: I1011 11:17:15.762014 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-dataplane-edpm-wqd9x" event={"ID":"80228f17-5924-456b-8353-45c055831ed5","Type":"ContainerDied","Data":"ed4a5ac26e0f995190c896a5b973a39f209d36b8cc703d2afed5b4eb99b93995"} Oct 11 11:17:17.255691 master-2 kubenswrapper[4776]: I1011 11:17:17.255639 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:17:17.429844 master-2 kubenswrapper[4776]: I1011 11:17:17.429346 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"edpm-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-telemetry-default-certs-0\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.429844 master-2 kubenswrapper[4776]: I1011 11:17:17.429440 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"edpm-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-libvirt-default-certs-0\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.429844 master-2 kubenswrapper[4776]: I1011 11:17:17.429506 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"edpm-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-neutron-metadata-default-certs-0\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.429844 master-2 kubenswrapper[4776]: I1011 11:17:17.429552 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-neutron-metadata-combined-ca-bundle\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.429844 master-2 kubenswrapper[4776]: I1011 11:17:17.429655 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"edpm-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-ovn-default-certs-0\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.429844 master-2 kubenswrapper[4776]: I1011 11:17:17.429730 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-inventory\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.430294 master-2 kubenswrapper[4776]: I1011 11:17:17.429828 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-nova-combined-ca-bundle\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.430294 master-2 kubenswrapper[4776]: I1011 11:17:17.429968 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ssh-key\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.430294 master-2 kubenswrapper[4776]: I1011 11:17:17.430004 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ovn-combined-ca-bundle\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.430294 master-2 kubenswrapper[4776]: I1011 11:17:17.430040 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv76g\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-kube-api-access-vv76g\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.430294 master-2 kubenswrapper[4776]: I1011 11:17:17.430080 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-telemetry-combined-ca-bundle\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.430294 master-2 kubenswrapper[4776]: I1011 11:17:17.430117 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-bootstrap-combined-ca-bundle\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.430294 master-2 kubenswrapper[4776]: I1011 11:17:17.430166 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-libvirt-combined-ca-bundle\") pod \"80228f17-5924-456b-8353-45c055831ed5\" (UID: \"80228f17-5924-456b-8353-45c055831ed5\") " Oct 11 11:17:17.433493 master-2 kubenswrapper[4776]: I1011 11:17:17.433292 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-telemetry-default-certs-0" (OuterVolumeSpecName: "edpm-telemetry-default-certs-0") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "edpm-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:17:17.433594 master-2 kubenswrapper[4776]: I1011 11:17:17.433513 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-ovn-default-certs-0" (OuterVolumeSpecName: "edpm-ovn-default-certs-0") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "edpm-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:17:17.433648 master-2 kubenswrapper[4776]: I1011 11:17:17.433608 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "edpm-neutron-metadata-default-certs-0") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "edpm-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:17:17.434228 master-2 kubenswrapper[4776]: I1011 11:17:17.434172 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:17:17.434501 master-2 kubenswrapper[4776]: I1011 11:17:17.434377 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:17:17.435028 master-2 kubenswrapper[4776]: I1011 11:17:17.434980 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:17:17.435178 master-2 kubenswrapper[4776]: I1011 11:17:17.435125 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:17:17.435600 master-2 kubenswrapper[4776]: I1011 11:17:17.435486 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:17:17.436118 master-2 kubenswrapper[4776]: I1011 11:17:17.436071 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:17:17.437236 master-2 kubenswrapper[4776]: I1011 11:17:17.437204 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-libvirt-default-certs-0" (OuterVolumeSpecName: "edpm-libvirt-default-certs-0") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "edpm-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:17:17.439922 master-2 kubenswrapper[4776]: I1011 11:17:17.439873 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-kube-api-access-vv76g" (OuterVolumeSpecName: "kube-api-access-vv76g") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "kube-api-access-vv76g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:17:17.457398 master-2 kubenswrapper[4776]: I1011 11:17:17.457315 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-inventory" (OuterVolumeSpecName: "inventory") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:17:17.470391 master-2 kubenswrapper[4776]: I1011 11:17:17.470325 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "80228f17-5924-456b-8353-45c055831ed5" (UID: "80228f17-5924-456b-8353-45c055831ed5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:17:17.532177 master-2 kubenswrapper[4776]: I1011 11:17:17.532124 4776 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-libvirt-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.532177 master-2 kubenswrapper[4776]: I1011 11:17:17.532162 4776 reconciler_common.go:293] "Volume detached for volume \"edpm-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-telemetry-default-certs-0\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.532177 master-2 kubenswrapper[4776]: I1011 11:17:17.532174 4776 reconciler_common.go:293] "Volume detached for volume \"edpm-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-libvirt-default-certs-0\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.532177 master-2 kubenswrapper[4776]: I1011 11:17:17.532184 4776 reconciler_common.go:293] "Volume detached for volume \"edpm-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-neutron-metadata-default-certs-0\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.532493 master-2 kubenswrapper[4776]: I1011 11:17:17.532194 4776 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-neutron-metadata-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.532493 master-2 kubenswrapper[4776]: I1011 11:17:17.532203 4776 reconciler_common.go:293] "Volume detached for volume \"edpm-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-edpm-ovn-default-certs-0\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.532493 master-2 kubenswrapper[4776]: I1011 11:17:17.532213 4776 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-inventory\") on node \"master-2\" DevicePath 
\"\"" Oct 11 11:17:17.532493 master-2 kubenswrapper[4776]: I1011 11:17:17.532222 4776 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-nova-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.532493 master-2 kubenswrapper[4776]: I1011 11:17:17.532230 4776 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ssh-key\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.532493 master-2 kubenswrapper[4776]: I1011 11:17:17.532238 4776 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-ovn-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.532493 master-2 kubenswrapper[4776]: I1011 11:17:17.532246 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vv76g\" (UniqueName: \"kubernetes.io/projected/80228f17-5924-456b-8353-45c055831ed5-kube-api-access-vv76g\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.532493 master-2 kubenswrapper[4776]: I1011 11:17:17.532254 4776 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-telemetry-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.532493 master-2 kubenswrapper[4776]: I1011 11:17:17.532263 4776 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80228f17-5924-456b-8353-45c055831ed5-bootstrap-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 11:17:17.779128 master-2 kubenswrapper[4776]: I1011 11:17:17.778989 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-dataplane-edpm-wqd9x" event={"ID":"80228f17-5924-456b-8353-45c055831ed5","Type":"ContainerDied","Data":"47dcdac25ed44c34bcf447eede90133e4d3a388caf728ccbffa21a56003e9abe"} Oct 11 11:17:17.779128 master-2 kubenswrapper[4776]: I1011 11:17:17.779058 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47dcdac25ed44c34bcf447eede90133e4d3a388caf728ccbffa21a56003e9abe" Oct 11 11:17:17.779128 master-2 kubenswrapper[4776]: I1011 11:17:17.779068 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-dataplane-edpm-wqd9x" Oct 11 11:17:17.930652 master-2 kubenswrapper[4776]: I1011 11:17:17.930573 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-dataplane-edpm-rgmsv"] Oct 11 11:17:17.935053 master-2 kubenswrapper[4776]: E1011 11:17:17.935009 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80228f17-5924-456b-8353-45c055831ed5" containerName="install-certs-dataplane-edpm" Oct 11 11:17:17.935053 master-2 kubenswrapper[4776]: I1011 11:17:17.935046 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="80228f17-5924-456b-8353-45c055831ed5" containerName="install-certs-dataplane-edpm" Oct 11 11:17:17.935301 master-2 kubenswrapper[4776]: I1011 11:17:17.935237 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="80228f17-5924-456b-8353-45c055831ed5" containerName="install-certs-dataplane-edpm" Oct 11 11:17:17.935924 master-2 kubenswrapper[4776]: I1011 11:17:17.935901 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:17.942696 master-2 kubenswrapper[4776]: I1011 11:17:17.941436 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 11:17:17.942696 master-2 kubenswrapper[4776]: I1011 11:17:17.941485 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 11:17:17.942696 master-2 kubenswrapper[4776]: I1011 11:17:17.941444 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Oct 11 11:17:17.945092 master-2 kubenswrapper[4776]: I1011 11:17:17.943923 4776 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-edpm" Oct 11 11:17:17.957699 master-2 kubenswrapper[4776]: I1011 11:17:17.956795 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-dataplane-edpm-rgmsv"] Oct 11 11:17:18.039856 master-2 kubenswrapper[4776]: I1011 11:17:18.039705 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ovn-combined-ca-bundle\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.039856 master-2 kubenswrapper[4776]: I1011 11:17:18.039774 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0ae3c981-e8c9-488a-94fb-91368f17324a-ovncontroller-config-0\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.040443 master-2 kubenswrapper[4776]: I1011 11:17:18.040391 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp57w\" (UniqueName: \"kubernetes.io/projected/0ae3c981-e8c9-488a-94fb-91368f17324a-kube-api-access-dp57w\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.040520 master-2 kubenswrapper[4776]: I1011 11:17:18.040493 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ssh-key\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.040560 master-2 kubenswrapper[4776]: I1011 11:17:18.040528 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-inventory\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.142279 master-2 kubenswrapper[4776]: I1011 11:17:18.142206 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ovn-combined-ca-bundle\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.142279 master-2 kubenswrapper[4776]: I1011 11:17:18.142265 4776 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0ae3c981-e8c9-488a-94fb-91368f17324a-ovncontroller-config-0\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.142279 master-2 kubenswrapper[4776]: I1011 11:17:18.142298 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp57w\" (UniqueName: \"kubernetes.io/projected/0ae3c981-e8c9-488a-94fb-91368f17324a-kube-api-access-dp57w\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.142607 master-2 kubenswrapper[4776]: I1011 11:17:18.142321 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ssh-key\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.142607 master-2 kubenswrapper[4776]: I1011 11:17:18.142340 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-inventory\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.143988 master-2 kubenswrapper[4776]: I1011 11:17:18.143946 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0ae3c981-e8c9-488a-94fb-91368f17324a-ovncontroller-config-0\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.145684 master-2 kubenswrapper[4776]: I1011 11:17:18.145621 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ovn-combined-ca-bundle\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.146514 master-2 kubenswrapper[4776]: I1011 11:17:18.146478 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ssh-key\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.146657 master-2 kubenswrapper[4776]: I1011 11:17:18.146593 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-inventory\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.171856 master-2 kubenswrapper[4776]: I1011 11:17:18.171806 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp57w\" (UniqueName: \"kubernetes.io/projected/0ae3c981-e8c9-488a-94fb-91368f17324a-kube-api-access-dp57w\") pod \"ovn-dataplane-edpm-rgmsv\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.303319 master-2 kubenswrapper[4776]: I1011 11:17:18.303197 4776 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:17:18.899579 master-2 kubenswrapper[4776]: I1011 11:17:18.899511 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-dataplane-edpm-rgmsv"] Oct 11 11:17:18.903005 master-2 kubenswrapper[4776]: W1011 11:17:18.902960 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ae3c981_e8c9_488a_94fb_91368f17324a.slice/crio-21d463e1e629c9491368b651284d741c90df52fbbbfb81e125c5bc9727efc4db WatchSource:0}: Error finding container 21d463e1e629c9491368b651284d741c90df52fbbbfb81e125c5bc9727efc4db: Status 404 returned error can't find the container with id 21d463e1e629c9491368b651284d741c90df52fbbbfb81e125c5bc9727efc4db Oct 11 11:17:19.799112 master-2 kubenswrapper[4776]: I1011 11:17:19.799028 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-dataplane-edpm-rgmsv" event={"ID":"0ae3c981-e8c9-488a-94fb-91368f17324a","Type":"ContainerStarted","Data":"5b0b9c3038438449e0c72e9b96930aa5aa926416dfcd43c1f308b33a3bcb9e1f"} Oct 11 11:17:19.799112 master-2 kubenswrapper[4776]: I1011 11:17:19.799088 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-dataplane-edpm-rgmsv" event={"ID":"0ae3c981-e8c9-488a-94fb-91368f17324a","Type":"ContainerStarted","Data":"21d463e1e629c9491368b651284d741c90df52fbbbfb81e125c5bc9727efc4db"} Oct 11 11:17:19.834483 master-2 kubenswrapper[4776]: I1011 11:17:19.834332 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-dataplane-edpm-rgmsv" podStartSLOduration=2.37965694 podStartE2EDuration="2.834300455s" podCreationTimestamp="2025-10-11 11:17:17 +0000 UTC" firstStartedPulling="2025-10-11 11:17:18.905786368 +0000 UTC m=+3073.690213077" lastFinishedPulling="2025-10-11 11:17:19.360429883 +0000 UTC m=+3074.144856592" observedRunningTime="2025-10-11 11:17:19.820539694 +0000 UTC m=+3074.604966403" watchObservedRunningTime="2025-10-11 11:17:19.834300455 +0000 UTC m=+3074.618727204" Oct 11 11:18:29.464256 master-2 kubenswrapper[4776]: I1011 11:18:29.464185 4776 generic.go:334] "Generic (PLEG): container finished" podID="0ae3c981-e8c9-488a-94fb-91368f17324a" containerID="5b0b9c3038438449e0c72e9b96930aa5aa926416dfcd43c1f308b33a3bcb9e1f" exitCode=0 Oct 11 11:18:29.464256 master-2 kubenswrapper[4776]: I1011 11:18:29.464246 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-dataplane-edpm-rgmsv" event={"ID":"0ae3c981-e8c9-488a-94fb-91368f17324a","Type":"ContainerDied","Data":"5b0b9c3038438449e0c72e9b96930aa5aa926416dfcd43c1f308b33a3bcb9e1f"} Oct 11 11:18:31.002771 master-2 kubenswrapper[4776]: I1011 11:18:31.002708 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:18:31.038194 master-2 kubenswrapper[4776]: I1011 11:18:31.038120 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp57w\" (UniqueName: \"kubernetes.io/projected/0ae3c981-e8c9-488a-94fb-91368f17324a-kube-api-access-dp57w\") pod \"0ae3c981-e8c9-488a-94fb-91368f17324a\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " Oct 11 11:18:31.038393 master-2 kubenswrapper[4776]: I1011 11:18:31.038250 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ovn-combined-ca-bundle\") pod \"0ae3c981-e8c9-488a-94fb-91368f17324a\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " Oct 11 11:18:31.038438 master-2 kubenswrapper[4776]: I1011 11:18:31.038406 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ssh-key\") pod \"0ae3c981-e8c9-488a-94fb-91368f17324a\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " Oct 11 11:18:31.038479 master-2 kubenswrapper[4776]: I1011 11:18:31.038444 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0ae3c981-e8c9-488a-94fb-91368f17324a-ovncontroller-config-0\") pod \"0ae3c981-e8c9-488a-94fb-91368f17324a\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " Oct 11 11:18:31.038517 master-2 kubenswrapper[4776]: I1011 11:18:31.038502 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-inventory\") pod \"0ae3c981-e8c9-488a-94fb-91368f17324a\" (UID: \"0ae3c981-e8c9-488a-94fb-91368f17324a\") " Oct 11 11:18:31.042839 master-2 kubenswrapper[4776]: I1011 11:18:31.042777 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "0ae3c981-e8c9-488a-94fb-91368f17324a" (UID: "0ae3c981-e8c9-488a-94fb-91368f17324a"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:18:31.043783 master-2 kubenswrapper[4776]: I1011 11:18:31.043733 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ae3c981-e8c9-488a-94fb-91368f17324a-kube-api-access-dp57w" (OuterVolumeSpecName: "kube-api-access-dp57w") pod "0ae3c981-e8c9-488a-94fb-91368f17324a" (UID: "0ae3c981-e8c9-488a-94fb-91368f17324a"). InnerVolumeSpecName "kube-api-access-dp57w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:18:31.066487 master-2 kubenswrapper[4776]: I1011 11:18:31.066429 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-inventory" (OuterVolumeSpecName: "inventory") pod "0ae3c981-e8c9-488a-94fb-91368f17324a" (UID: "0ae3c981-e8c9-488a-94fb-91368f17324a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:18:31.067009 master-2 kubenswrapper[4776]: I1011 11:18:31.066962 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0ae3c981-e8c9-488a-94fb-91368f17324a" (UID: "0ae3c981-e8c9-488a-94fb-91368f17324a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 11:18:31.069055 master-2 kubenswrapper[4776]: I1011 11:18:31.068999 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ae3c981-e8c9-488a-94fb-91368f17324a-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "0ae3c981-e8c9-488a-94fb-91368f17324a" (UID: "0ae3c981-e8c9-488a-94fb-91368f17324a"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 11:18:31.141410 master-2 kubenswrapper[4776]: I1011 11:18:31.140776 4776 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ssh-key\") on node \"master-2\" DevicePath \"\"" Oct 11 11:18:31.141410 master-2 kubenswrapper[4776]: I1011 11:18:31.140821 4776 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0ae3c981-e8c9-488a-94fb-91368f17324a-ovncontroller-config-0\") on node \"master-2\" DevicePath \"\"" Oct 11 11:18:31.141410 master-2 kubenswrapper[4776]: I1011 11:18:31.140832 4776 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-inventory\") on node \"master-2\" DevicePath \"\"" Oct 11 11:18:31.141410 master-2 kubenswrapper[4776]: I1011 11:18:31.140842 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp57w\" (UniqueName: \"kubernetes.io/projected/0ae3c981-e8c9-488a-94fb-91368f17324a-kube-api-access-dp57w\") on node \"master-2\" DevicePath \"\"" Oct 11 11:18:31.141410 master-2 kubenswrapper[4776]: I1011 11:18:31.140850 4776 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ae3c981-e8c9-488a-94fb-91368f17324a-ovn-combined-ca-bundle\") on node \"master-2\" DevicePath \"\"" Oct 11 11:18:31.482748 master-2 kubenswrapper[4776]: I1011 11:18:31.480711 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-dataplane-edpm-rgmsv" event={"ID":"0ae3c981-e8c9-488a-94fb-91368f17324a","Type":"ContainerDied","Data":"21d463e1e629c9491368b651284d741c90df52fbbbfb81e125c5bc9727efc4db"} Oct 11 11:18:31.482748 master-2 kubenswrapper[4776]: I1011 11:18:31.480760 4776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21d463e1e629c9491368b651284d741c90df52fbbbfb81e125c5bc9727efc4db" Oct 11 11:18:31.482748 master-2 kubenswrapper[4776]: I1011 11:18:31.480739 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-dataplane-edpm-rgmsv" Oct 11 11:18:56.349239 master-2 kubenswrapper[4776]: I1011 11:18:56.349068 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qdcmh"] Oct 11 11:18:56.350285 master-2 kubenswrapper[4776]: E1011 11:18:56.349469 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae3c981-e8c9-488a-94fb-91368f17324a" containerName="ovn-dataplane-edpm" Oct 11 11:18:56.350285 master-2 kubenswrapper[4776]: I1011 11:18:56.349484 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae3c981-e8c9-488a-94fb-91368f17324a" containerName="ovn-dataplane-edpm" Oct 11 11:18:56.350285 master-2 kubenswrapper[4776]: I1011 11:18:56.349697 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ae3c981-e8c9-488a-94fb-91368f17324a" containerName="ovn-dataplane-edpm" Oct 11 11:18:56.351035 master-2 kubenswrapper[4776]: I1011 11:18:56.350985 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:18:56.378752 master-2 kubenswrapper[4776]: I1011 11:18:56.373904 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qdcmh"] Oct 11 11:18:56.496633 master-2 kubenswrapper[4776]: I1011 11:18:56.496561 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-catalog-content\") pod \"certified-operators-qdcmh\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") " pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:18:56.496988 master-2 kubenswrapper[4776]: I1011 11:18:56.496743 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5zpt\" (UniqueName: \"kubernetes.io/projected/f5435850-e29f-489e-8534-a73e291e2ae7-kube-api-access-h5zpt\") pod \"certified-operators-qdcmh\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") " pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:18:56.496988 master-2 kubenswrapper[4776]: I1011 11:18:56.496817 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-utilities\") pod \"certified-operators-qdcmh\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") " pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:18:56.598411 master-2 kubenswrapper[4776]: I1011 11:18:56.598331 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-catalog-content\") pod \"certified-operators-qdcmh\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") " pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:18:56.598696 master-2 kubenswrapper[4776]: I1011 11:18:56.598463 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5zpt\" (UniqueName: \"kubernetes.io/projected/f5435850-e29f-489e-8534-a73e291e2ae7-kube-api-access-h5zpt\") pod \"certified-operators-qdcmh\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") " pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:18:56.598696 master-2 kubenswrapper[4776]: I1011 11:18:56.598531 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-utilities\") pod \"certified-operators-qdcmh\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") " pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:18:56.599008 master-2 kubenswrapper[4776]: I1011 11:18:56.598972 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-catalog-content\") pod \"certified-operators-qdcmh\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") " pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:18:56.599059 master-2 kubenswrapper[4776]: I1011 11:18:56.599012 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-utilities\") pod \"certified-operators-qdcmh\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") " pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:18:56.624881 master-2 kubenswrapper[4776]: I1011 11:18:56.624781 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5zpt\" (UniqueName: \"kubernetes.io/projected/f5435850-e29f-489e-8534-a73e291e2ae7-kube-api-access-h5zpt\") pod \"certified-operators-qdcmh\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") " pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:18:56.699896 master-2 kubenswrapper[4776]: I1011 11:18:56.699835 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:18:57.187277 master-2 kubenswrapper[4776]: I1011 11:18:57.187206 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qdcmh"] Oct 11 11:18:57.191450 master-2 kubenswrapper[4776]: W1011 11:18:57.191359 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5435850_e29f_489e_8534_a73e291e2ae7.slice/crio-cf04f038161f6e02149819e27e04edc6a1248a7ef7d84020426beed99d05e6a7 WatchSource:0}: Error finding container cf04f038161f6e02149819e27e04edc6a1248a7ef7d84020426beed99d05e6a7: Status 404 returned error can't find the container with id cf04f038161f6e02149819e27e04edc6a1248a7ef7d84020426beed99d05e6a7 Oct 11 11:18:57.729497 master-2 kubenswrapper[4776]: I1011 11:18:57.729420 4776 generic.go:334] "Generic (PLEG): container finished" podID="f5435850-e29f-489e-8534-a73e291e2ae7" containerID="4d72cee9650196c8d5f157f67566f854a9f9a3533b78185eedb4ea6e40d164de" exitCode=0 Oct 11 11:18:57.730069 master-2 kubenswrapper[4776]: I1011 11:18:57.729520 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdcmh" event={"ID":"f5435850-e29f-489e-8534-a73e291e2ae7","Type":"ContainerDied","Data":"4d72cee9650196c8d5f157f67566f854a9f9a3533b78185eedb4ea6e40d164de"} Oct 11 11:18:57.730069 master-2 kubenswrapper[4776]: I1011 11:18:57.729707 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdcmh" event={"ID":"f5435850-e29f-489e-8534-a73e291e2ae7","Type":"ContainerStarted","Data":"cf04f038161f6e02149819e27e04edc6a1248a7ef7d84020426beed99d05e6a7"} Oct 11 11:18:58.739170 master-2 kubenswrapper[4776]: I1011 11:18:58.739102 4776 generic.go:334] "Generic (PLEG): container finished" podID="f5435850-e29f-489e-8534-a73e291e2ae7" 
containerID="83fc6463286a961051ddd9f7358d73ea0c89d9a2607188bd8e40bdd606ab8ba9" exitCode=0 Oct 11 11:18:58.739170 master-2 kubenswrapper[4776]: I1011 11:18:58.739156 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdcmh" event={"ID":"f5435850-e29f-489e-8534-a73e291e2ae7","Type":"ContainerDied","Data":"83fc6463286a961051ddd9f7358d73ea0c89d9a2607188bd8e40bdd606ab8ba9"} Oct 11 11:18:59.749694 master-2 kubenswrapper[4776]: I1011 11:18:59.749612 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdcmh" event={"ID":"f5435850-e29f-489e-8534-a73e291e2ae7","Type":"ContainerStarted","Data":"3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9"} Oct 11 11:18:59.781510 master-2 kubenswrapper[4776]: I1011 11:18:59.781426 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qdcmh" podStartSLOduration=2.33312347 podStartE2EDuration="3.781408176s" podCreationTimestamp="2025-10-11 11:18:56 +0000 UTC" firstStartedPulling="2025-10-11 11:18:57.731905585 +0000 UTC m=+3172.516332294" lastFinishedPulling="2025-10-11 11:18:59.180190291 +0000 UTC m=+3173.964617000" observedRunningTime="2025-10-11 11:18:59.773651737 +0000 UTC m=+3174.558078446" watchObservedRunningTime="2025-10-11 11:18:59.781408176 +0000 UTC m=+3174.565834885" Oct 11 11:19:06.700753 master-2 kubenswrapper[4776]: I1011 11:19:06.700624 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:19:06.700753 master-2 kubenswrapper[4776]: I1011 11:19:06.700758 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:19:06.759754 master-2 kubenswrapper[4776]: I1011 11:19:06.759437 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:19:06.869476 master-2 kubenswrapper[4776]: I1011 11:19:06.869385 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:19:07.028241 master-2 kubenswrapper[4776]: I1011 11:19:07.028062 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qdcmh"] Oct 11 11:19:08.830263 master-2 kubenswrapper[4776]: I1011 11:19:08.830187 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qdcmh" podUID="f5435850-e29f-489e-8534-a73e291e2ae7" containerName="registry-server" containerID="cri-o://3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9" gracePeriod=2 Oct 11 11:19:09.414750 master-2 kubenswrapper[4776]: I1011 11:19:09.414653 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:19:09.489211 master-2 kubenswrapper[4776]: I1011 11:19:09.489148 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-utilities\") pod \"f5435850-e29f-489e-8534-a73e291e2ae7\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") " Oct 11 11:19:09.489468 master-2 kubenswrapper[4776]: I1011 11:19:09.489246 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5zpt\" (UniqueName: \"kubernetes.io/projected/f5435850-e29f-489e-8534-a73e291e2ae7-kube-api-access-h5zpt\") pod \"f5435850-e29f-489e-8534-a73e291e2ae7\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") " Oct 11 11:19:09.489468 master-2 kubenswrapper[4776]: I1011 11:19:09.489305 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-catalog-content\") pod \"f5435850-e29f-489e-8534-a73e291e2ae7\" (UID: \"f5435850-e29f-489e-8534-a73e291e2ae7\") " Oct 11 11:19:09.490298 master-2 kubenswrapper[4776]: I1011 11:19:09.490250 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-utilities" (OuterVolumeSpecName: "utilities") pod "f5435850-e29f-489e-8534-a73e291e2ae7" (UID: "f5435850-e29f-489e-8534-a73e291e2ae7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 11:19:09.492650 master-2 kubenswrapper[4776]: I1011 11:19:09.492586 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5435850-e29f-489e-8534-a73e291e2ae7-kube-api-access-h5zpt" (OuterVolumeSpecName: "kube-api-access-h5zpt") pod "f5435850-e29f-489e-8534-a73e291e2ae7" (UID: "f5435850-e29f-489e-8534-a73e291e2ae7"). InnerVolumeSpecName "kube-api-access-h5zpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:19:09.592844 master-2 kubenswrapper[4776]: I1011 11:19:09.592699 4776 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-utilities\") on node \"master-2\" DevicePath \"\"" Oct 11 11:19:09.592844 master-2 kubenswrapper[4776]: I1011 11:19:09.592776 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5zpt\" (UniqueName: \"kubernetes.io/projected/f5435850-e29f-489e-8534-a73e291e2ae7-kube-api-access-h5zpt\") on node \"master-2\" DevicePath \"\"" Oct 11 11:19:09.699937 master-2 kubenswrapper[4776]: I1011 11:19:09.699863 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5435850-e29f-489e-8534-a73e291e2ae7" (UID: "f5435850-e29f-489e-8534-a73e291e2ae7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 11:19:09.796133 master-2 kubenswrapper[4776]: I1011 11:19:09.796060 4776 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5435850-e29f-489e-8534-a73e291e2ae7-catalog-content\") on node \"master-2\" DevicePath \"\"" Oct 11 11:19:09.839592 master-2 kubenswrapper[4776]: I1011 11:19:09.839536 4776 generic.go:334] "Generic (PLEG): container finished" podID="f5435850-e29f-489e-8534-a73e291e2ae7" containerID="3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9" exitCode=0 Oct 11 11:19:09.839592 master-2 kubenswrapper[4776]: I1011 11:19:09.839593 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdcmh" event={"ID":"f5435850-e29f-489e-8534-a73e291e2ae7","Type":"ContainerDied","Data":"3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9"} Oct 11 11:19:09.840223 master-2 kubenswrapper[4776]: I1011 11:19:09.839611 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qdcmh" Oct 11 11:19:09.840223 master-2 kubenswrapper[4776]: I1011 11:19:09.839641 4776 scope.go:117] "RemoveContainer" containerID="3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9" Oct 11 11:19:09.840223 master-2 kubenswrapper[4776]: I1011 11:19:09.839631 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdcmh" event={"ID":"f5435850-e29f-489e-8534-a73e291e2ae7","Type":"ContainerDied","Data":"cf04f038161f6e02149819e27e04edc6a1248a7ef7d84020426beed99d05e6a7"} Oct 11 11:19:09.859501 master-2 kubenswrapper[4776]: I1011 11:19:09.859408 4776 scope.go:117] "RemoveContainer" containerID="83fc6463286a961051ddd9f7358d73ea0c89d9a2607188bd8e40bdd606ab8ba9" Oct 11 11:19:09.881155 master-2 kubenswrapper[4776]: I1011 11:19:09.880863 4776 scope.go:117] "RemoveContainer" containerID="4d72cee9650196c8d5f157f67566f854a9f9a3533b78185eedb4ea6e40d164de" Oct 11 11:19:09.886157 master-2 kubenswrapper[4776]: I1011 11:19:09.886098 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qdcmh"] Oct 11 11:19:09.894530 master-2 kubenswrapper[4776]: I1011 11:19:09.894476 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qdcmh"] Oct 11 11:19:09.918191 master-2 kubenswrapper[4776]: I1011 11:19:09.918061 4776 scope.go:117] "RemoveContainer" containerID="3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9" Oct 11 11:19:09.918854 master-2 kubenswrapper[4776]: E1011 11:19:09.918805 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9\": container with ID starting with 3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9 not found: ID does not exist" containerID="3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9" Oct 11 11:19:09.918980 master-2 kubenswrapper[4776]: I1011 11:19:09.918862 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9"} err="failed to get container status \"3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9\": rpc error: code = NotFound desc = could not find container 
\"3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9\": container with ID starting with 3eb5308bb14fd7a71dea829c7e2db9d74d1a85069f57202dbd368df557cb67e9 not found: ID does not exist" Oct 11 11:19:09.918980 master-2 kubenswrapper[4776]: I1011 11:19:09.918883 4776 scope.go:117] "RemoveContainer" containerID="83fc6463286a961051ddd9f7358d73ea0c89d9a2607188bd8e40bdd606ab8ba9" Oct 11 11:19:09.919306 master-2 kubenswrapper[4776]: E1011 11:19:09.919272 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83fc6463286a961051ddd9f7358d73ea0c89d9a2607188bd8e40bdd606ab8ba9\": container with ID starting with 83fc6463286a961051ddd9f7358d73ea0c89d9a2607188bd8e40bdd606ab8ba9 not found: ID does not exist" containerID="83fc6463286a961051ddd9f7358d73ea0c89d9a2607188bd8e40bdd606ab8ba9" Oct 11 11:19:09.919306 master-2 kubenswrapper[4776]: I1011 11:19:09.919298 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83fc6463286a961051ddd9f7358d73ea0c89d9a2607188bd8e40bdd606ab8ba9"} err="failed to get container status \"83fc6463286a961051ddd9f7358d73ea0c89d9a2607188bd8e40bdd606ab8ba9\": rpc error: code = NotFound desc = could not find container \"83fc6463286a961051ddd9f7358d73ea0c89d9a2607188bd8e40bdd606ab8ba9\": container with ID starting with 83fc6463286a961051ddd9f7358d73ea0c89d9a2607188bd8e40bdd606ab8ba9 not found: ID does not exist" Oct 11 11:19:09.919439 master-2 kubenswrapper[4776]: I1011 11:19:09.919310 4776 scope.go:117] "RemoveContainer" containerID="4d72cee9650196c8d5f157f67566f854a9f9a3533b78185eedb4ea6e40d164de" Oct 11 11:19:09.919643 master-2 kubenswrapper[4776]: E1011 11:19:09.919618 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d72cee9650196c8d5f157f67566f854a9f9a3533b78185eedb4ea6e40d164de\": container with ID starting with 4d72cee9650196c8d5f157f67566f854a9f9a3533b78185eedb4ea6e40d164de not found: ID does not exist" containerID="4d72cee9650196c8d5f157f67566f854a9f9a3533b78185eedb4ea6e40d164de" Oct 11 11:19:09.919740 master-2 kubenswrapper[4776]: I1011 11:19:09.919646 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d72cee9650196c8d5f157f67566f854a9f9a3533b78185eedb4ea6e40d164de"} err="failed to get container status \"4d72cee9650196c8d5f157f67566f854a9f9a3533b78185eedb4ea6e40d164de\": rpc error: code = NotFound desc = could not find container \"4d72cee9650196c8d5f157f67566f854a9f9a3533b78185eedb4ea6e40d164de\": container with ID starting with 4d72cee9650196c8d5f157f67566f854a9f9a3533b78185eedb4ea6e40d164de not found: ID does not exist" Oct 11 11:19:10.070758 master-2 kubenswrapper[4776]: I1011 11:19:10.070694 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5435850-e29f-489e-8534-a73e291e2ae7" path="/var/lib/kubelet/pods/f5435850-e29f-489e-8534-a73e291e2ae7/volumes" Oct 11 11:29:56.301909 master-2 kubenswrapper[4776]: I1011 11:29:56.301552 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4z7g2"] Oct 11 11:29:56.302782 master-2 kubenswrapper[4776]: E1011 11:29:56.302005 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5435850-e29f-489e-8534-a73e291e2ae7" containerName="extract-content" Oct 11 11:29:56.302782 master-2 kubenswrapper[4776]: I1011 11:29:56.302020 4776 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f5435850-e29f-489e-8534-a73e291e2ae7" containerName="extract-content" Oct 11 11:29:56.302782 master-2 kubenswrapper[4776]: E1011 11:29:56.302039 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5435850-e29f-489e-8534-a73e291e2ae7" containerName="registry-server" Oct 11 11:29:56.302782 master-2 kubenswrapper[4776]: I1011 11:29:56.302045 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5435850-e29f-489e-8534-a73e291e2ae7" containerName="registry-server" Oct 11 11:29:56.302782 master-2 kubenswrapper[4776]: E1011 11:29:56.302072 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5435850-e29f-489e-8534-a73e291e2ae7" containerName="extract-utilities" Oct 11 11:29:56.302782 master-2 kubenswrapper[4776]: I1011 11:29:56.302079 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5435850-e29f-489e-8534-a73e291e2ae7" containerName="extract-utilities" Oct 11 11:29:56.302782 master-2 kubenswrapper[4776]: I1011 11:29:56.302270 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5435850-e29f-489e-8534-a73e291e2ae7" containerName="registry-server" Oct 11 11:29:56.303523 master-2 kubenswrapper[4776]: I1011 11:29:56.303497 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4z7g2" Oct 11 11:29:56.326165 master-2 kubenswrapper[4776]: I1011 11:29:56.325940 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4z7g2"] Oct 11 11:29:56.426642 master-2 kubenswrapper[4776]: I1011 11:29:56.426584 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhbxb\" (UniqueName: \"kubernetes.io/projected/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-kube-api-access-xhbxb\") pod \"certified-operators-4z7g2\" (UID: \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " pod="openshift-marketplace/certified-operators-4z7g2" Oct 11 11:29:56.426957 master-2 kubenswrapper[4776]: I1011 11:29:56.426944 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-catalog-content\") pod \"certified-operators-4z7g2\" (UID: \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " pod="openshift-marketplace/certified-operators-4z7g2" Oct 11 11:29:56.427065 master-2 kubenswrapper[4776]: I1011 11:29:56.427052 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-utilities\") pod \"certified-operators-4z7g2\" (UID: \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " pod="openshift-marketplace/certified-operators-4z7g2" Oct 11 11:29:56.529057 master-2 kubenswrapper[4776]: I1011 11:29:56.528988 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-catalog-content\") pod \"certified-operators-4z7g2\" (UID: \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " pod="openshift-marketplace/certified-operators-4z7g2" Oct 11 11:29:56.529401 master-2 kubenswrapper[4776]: I1011 11:29:56.529377 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-utilities\") pod \"certified-operators-4z7g2\" (UID: 
\"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " pod="openshift-marketplace/certified-operators-4z7g2" Oct 11 11:29:56.529726 master-2 kubenswrapper[4776]: I1011 11:29:56.529704 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhbxb\" (UniqueName: \"kubernetes.io/projected/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-kube-api-access-xhbxb\") pod \"certified-operators-4z7g2\" (UID: \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " pod="openshift-marketplace/certified-operators-4z7g2" Oct 11 11:29:56.530003 master-2 kubenswrapper[4776]: I1011 11:29:56.529927 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-catalog-content\") pod \"certified-operators-4z7g2\" (UID: \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " pod="openshift-marketplace/certified-operators-4z7g2" Oct 11 11:29:56.530141 master-2 kubenswrapper[4776]: I1011 11:29:56.530093 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-utilities\") pod \"certified-operators-4z7g2\" (UID: \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " pod="openshift-marketplace/certified-operators-4z7g2" Oct 11 11:29:56.552849 master-2 kubenswrapper[4776]: I1011 11:29:56.552271 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhbxb\" (UniqueName: \"kubernetes.io/projected/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-kube-api-access-xhbxb\") pod \"certified-operators-4z7g2\" (UID: \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " pod="openshift-marketplace/certified-operators-4z7g2" Oct 11 11:29:56.629730 master-2 kubenswrapper[4776]: I1011 11:29:56.629653 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4z7g2" Oct 11 11:29:57.139060 master-2 kubenswrapper[4776]: I1011 11:29:57.137324 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4z7g2"] Oct 11 11:29:57.139060 master-2 kubenswrapper[4776]: W1011 11:29:57.137514 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffb7ba2d_b676_4dc0_8809_0613818a3ea6.slice/crio-64cb239022063cb12fe37c86218229ce4a24a0051517d1c5ccb388dad4b429cb WatchSource:0}: Error finding container 64cb239022063cb12fe37c86218229ce4a24a0051517d1c5ccb388dad4b429cb: Status 404 returned error can't find the container with id 64cb239022063cb12fe37c86218229ce4a24a0051517d1c5ccb388dad4b429cb Oct 11 11:29:57.977739 master-2 kubenswrapper[4776]: I1011 11:29:57.977620 4776 generic.go:334] "Generic (PLEG): container finished" podID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" containerID="a0e0c80c9f567abc62f1423fa2f9aff4a61bd707db4da35c4e97791c676344ed" exitCode=0 Oct 11 11:29:57.977739 master-2 kubenswrapper[4776]: I1011 11:29:57.977710 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4z7g2" event={"ID":"ffb7ba2d-b676-4dc0-8809-0613818a3ea6","Type":"ContainerDied","Data":"a0e0c80c9f567abc62f1423fa2f9aff4a61bd707db4da35c4e97791c676344ed"} Oct 11 11:29:57.977739 master-2 kubenswrapper[4776]: I1011 11:29:57.977740 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4z7g2" event={"ID":"ffb7ba2d-b676-4dc0-8809-0613818a3ea6","Type":"ContainerStarted","Data":"64cb239022063cb12fe37c86218229ce4a24a0051517d1c5ccb388dad4b429cb"} Oct 11 11:29:57.980976 master-2 kubenswrapper[4776]: I1011 11:29:57.980760 4776 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 11:29:58.988947 master-2 kubenswrapper[4776]: I1011 11:29:58.988874 4776 generic.go:334] "Generic (PLEG): container finished" podID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" containerID="3a3c3a8467c3212881b6943b014e207a6900f90e9078ac699d7799b82bd1649d" exitCode=0 Oct 11 11:29:58.988947 master-2 kubenswrapper[4776]: I1011 11:29:58.988932 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4z7g2" event={"ID":"ffb7ba2d-b676-4dc0-8809-0613818a3ea6","Type":"ContainerDied","Data":"3a3c3a8467c3212881b6943b014e207a6900f90e9078ac699d7799b82bd1649d"} Oct 11 11:30:00.001540 master-2 kubenswrapper[4776]: I1011 11:30:00.001454 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4z7g2" event={"ID":"ffb7ba2d-b676-4dc0-8809-0613818a3ea6","Type":"ContainerStarted","Data":"76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029"} Oct 11 11:30:00.031506 master-2 kubenswrapper[4776]: I1011 11:30:00.031397 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4z7g2" podStartSLOduration=2.5982209259999998 podStartE2EDuration="4.031376712s" podCreationTimestamp="2025-10-11 11:29:56 +0000 UTC" firstStartedPulling="2025-10-11 11:29:57.980691199 +0000 UTC m=+3832.765117908" lastFinishedPulling="2025-10-11 11:29:59.413846945 +0000 UTC m=+3834.198273694" observedRunningTime="2025-10-11 11:30:00.030076307 +0000 UTC m=+3834.814503026" watchObservedRunningTime="2025-10-11 11:30:00.031376712 +0000 UTC m=+3834.815803421" Oct 11 11:30:04.760757 master-2 
kubenswrapper[4776]: I1011 11:30:04.760664 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv"] Oct 11 11:30:04.767801 master-2 kubenswrapper[4776]: I1011 11:30:04.767768 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv"] Oct 11 11:30:06.072204 master-2 kubenswrapper[4776]: I1011 11:30:06.072129 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4aea0e1-d6c8-4542-85c7-e46b945d61a0" path="/var/lib/kubelet/pods/a4aea0e1-d6c8-4542-85c7-e46b945d61a0/volumes" Oct 11 11:30:06.631135 master-2 kubenswrapper[4776]: I1011 11:30:06.630873 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4z7g2" Oct 11 11:30:06.631135 master-2 kubenswrapper[4776]: I1011 11:30:06.630971 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4z7g2" Oct 11 11:30:06.673066 master-2 kubenswrapper[4776]: I1011 11:30:06.673012 4776 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4z7g2" Oct 11 11:30:07.124170 master-2 kubenswrapper[4776]: I1011 11:30:07.124116 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4z7g2" Oct 11 11:30:07.211085 master-2 kubenswrapper[4776]: I1011 11:30:07.211011 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4z7g2"] Oct 11 11:30:09.087903 master-2 kubenswrapper[4776]: I1011 11:30:09.087816 4776 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4z7g2" podUID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" containerName="registry-server" containerID="cri-o://76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029" gracePeriod=2 Oct 11 11:30:09.601182 master-2 kubenswrapper[4776]: I1011 11:30:09.601133 4776 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4z7g2" Oct 11 11:30:09.731998 master-2 kubenswrapper[4776]: I1011 11:30:09.731924 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-utilities\") pod \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\" (UID: \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " Oct 11 11:30:09.732243 master-2 kubenswrapper[4776]: I1011 11:30:09.732122 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-catalog-content\") pod \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\" (UID: \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " Oct 11 11:30:09.732243 master-2 kubenswrapper[4776]: I1011 11:30:09.732170 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhbxb\" (UniqueName: \"kubernetes.io/projected/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-kube-api-access-xhbxb\") pod \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\" (UID: \"ffb7ba2d-b676-4dc0-8809-0613818a3ea6\") " Oct 11 11:30:09.734446 master-2 kubenswrapper[4776]: I1011 11:30:09.734368 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-utilities" (OuterVolumeSpecName: "utilities") pod "ffb7ba2d-b676-4dc0-8809-0613818a3ea6" (UID: "ffb7ba2d-b676-4dc0-8809-0613818a3ea6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 11:30:09.735710 master-2 kubenswrapper[4776]: I1011 11:30:09.735620 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-kube-api-access-xhbxb" (OuterVolumeSpecName: "kube-api-access-xhbxb") pod "ffb7ba2d-b676-4dc0-8809-0613818a3ea6" (UID: "ffb7ba2d-b676-4dc0-8809-0613818a3ea6"). InnerVolumeSpecName "kube-api-access-xhbxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:30:09.779699 master-2 kubenswrapper[4776]: I1011 11:30:09.779561 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffb7ba2d-b676-4dc0-8809-0613818a3ea6" (UID: "ffb7ba2d-b676-4dc0-8809-0613818a3ea6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 11:30:09.835737 master-2 kubenswrapper[4776]: I1011 11:30:09.835620 4776 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-catalog-content\") on node \"master-2\" DevicePath \"\"" Oct 11 11:30:09.835737 master-2 kubenswrapper[4776]: I1011 11:30:09.835715 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhbxb\" (UniqueName: \"kubernetes.io/projected/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-kube-api-access-xhbxb\") on node \"master-2\" DevicePath \"\"" Oct 11 11:30:09.835737 master-2 kubenswrapper[4776]: I1011 11:30:09.835732 4776 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb7ba2d-b676-4dc0-8809-0613818a3ea6-utilities\") on node \"master-2\" DevicePath \"\"" Oct 11 11:30:10.101148 master-2 kubenswrapper[4776]: I1011 11:30:10.101080 4776 generic.go:334] "Generic (PLEG): container finished" podID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" containerID="76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029" exitCode=0 Oct 11 11:30:10.101148 master-2 kubenswrapper[4776]: I1011 11:30:10.101144 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4z7g2" event={"ID":"ffb7ba2d-b676-4dc0-8809-0613818a3ea6","Type":"ContainerDied","Data":"76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029"} Oct 11 11:30:10.101787 master-2 kubenswrapper[4776]: I1011 11:30:10.101169 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4z7g2" Oct 11 11:30:10.101787 master-2 kubenswrapper[4776]: I1011 11:30:10.101198 4776 scope.go:117] "RemoveContainer" containerID="76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029" Oct 11 11:30:10.101787 master-2 kubenswrapper[4776]: I1011 11:30:10.101180 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4z7g2" event={"ID":"ffb7ba2d-b676-4dc0-8809-0613818a3ea6","Type":"ContainerDied","Data":"64cb239022063cb12fe37c86218229ce4a24a0051517d1c5ccb388dad4b429cb"} Oct 11 11:30:10.126760 master-2 kubenswrapper[4776]: I1011 11:30:10.126731 4776 scope.go:117] "RemoveContainer" containerID="3a3c3a8467c3212881b6943b014e207a6900f90e9078ac699d7799b82bd1649d" Oct 11 11:30:10.148064 master-2 kubenswrapper[4776]: I1011 11:30:10.147318 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4z7g2"] Oct 11 11:30:10.157191 master-2 kubenswrapper[4776]: I1011 11:30:10.157140 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4z7g2"] Oct 11 11:30:10.162455 master-2 kubenswrapper[4776]: I1011 11:30:10.162420 4776 scope.go:117] "RemoveContainer" containerID="a0e0c80c9f567abc62f1423fa2f9aff4a61bd707db4da35c4e97791c676344ed" Oct 11 11:30:10.195933 master-2 kubenswrapper[4776]: I1011 11:30:10.195295 4776 scope.go:117] "RemoveContainer" containerID="76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029" Oct 11 11:30:10.196220 master-2 kubenswrapper[4776]: E1011 11:30:10.196187 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029\": container with ID starting with 
76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029 not found: ID does not exist" containerID="76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029" Oct 11 11:30:10.196266 master-2 kubenswrapper[4776]: I1011 11:30:10.196218 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029"} err="failed to get container status \"76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029\": rpc error: code = NotFound desc = could not find container \"76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029\": container with ID starting with 76dfce7869b1daf767caceadadb6c9eae1abbc0e83c94937a79e74bef3c4b029 not found: ID does not exist" Oct 11 11:30:10.196266 master-2 kubenswrapper[4776]: I1011 11:30:10.196239 4776 scope.go:117] "RemoveContainer" containerID="3a3c3a8467c3212881b6943b014e207a6900f90e9078ac699d7799b82bd1649d" Oct 11 11:30:10.196658 master-2 kubenswrapper[4776]: E1011 11:30:10.196622 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a3c3a8467c3212881b6943b014e207a6900f90e9078ac699d7799b82bd1649d\": container with ID starting with 3a3c3a8467c3212881b6943b014e207a6900f90e9078ac699d7799b82bd1649d not found: ID does not exist" containerID="3a3c3a8467c3212881b6943b014e207a6900f90e9078ac699d7799b82bd1649d" Oct 11 11:30:10.196721 master-2 kubenswrapper[4776]: I1011 11:30:10.196653 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a3c3a8467c3212881b6943b014e207a6900f90e9078ac699d7799b82bd1649d"} err="failed to get container status \"3a3c3a8467c3212881b6943b014e207a6900f90e9078ac699d7799b82bd1649d\": rpc error: code = NotFound desc = could not find container \"3a3c3a8467c3212881b6943b014e207a6900f90e9078ac699d7799b82bd1649d\": container with ID starting with 3a3c3a8467c3212881b6943b014e207a6900f90e9078ac699d7799b82bd1649d not found: ID does not exist" Oct 11 11:30:10.196721 master-2 kubenswrapper[4776]: I1011 11:30:10.196690 4776 scope.go:117] "RemoveContainer" containerID="a0e0c80c9f567abc62f1423fa2f9aff4a61bd707db4da35c4e97791c676344ed" Oct 11 11:30:10.196988 master-2 kubenswrapper[4776]: E1011 11:30:10.196957 4776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0e0c80c9f567abc62f1423fa2f9aff4a61bd707db4da35c4e97791c676344ed\": container with ID starting with a0e0c80c9f567abc62f1423fa2f9aff4a61bd707db4da35c4e97791c676344ed not found: ID does not exist" containerID="a0e0c80c9f567abc62f1423fa2f9aff4a61bd707db4da35c4e97791c676344ed" Oct 11 11:30:10.197029 master-2 kubenswrapper[4776]: I1011 11:30:10.196986 4776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0e0c80c9f567abc62f1423fa2f9aff4a61bd707db4da35c4e97791c676344ed"} err="failed to get container status \"a0e0c80c9f567abc62f1423fa2f9aff4a61bd707db4da35c4e97791c676344ed\": rpc error: code = NotFound desc = could not find container \"a0e0c80c9f567abc62f1423fa2f9aff4a61bd707db4da35c4e97791c676344ed\": container with ID starting with a0e0c80c9f567abc62f1423fa2f9aff4a61bd707db4da35c4e97791c676344ed not found: ID does not exist" Oct 11 11:30:12.074441 master-2 kubenswrapper[4776]: I1011 11:30:12.074301 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" 
path="/var/lib/kubelet/pods/ffb7ba2d-b676-4dc0-8809-0613818a3ea6/volumes" Oct 11 11:30:18.709020 master-2 kubenswrapper[4776]: I1011 11:30:18.708903 4776 scope.go:117] "RemoveContainer" containerID="84c5dbd5af0fe30b5900ca449a4f0411c4cff8ceb60f39636cf23f6147444fb3" Oct 11 11:31:43.878285 master-2 kubenswrapper[4776]: I1011 11:31:43.874447 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fg6sc/must-gather-ctrmz"] Oct 11 11:31:43.878285 master-2 kubenswrapper[4776]: E1011 11:31:43.877800 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" containerName="extract-content" Oct 11 11:31:43.878285 master-2 kubenswrapper[4776]: I1011 11:31:43.877835 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" containerName="extract-content" Oct 11 11:31:43.878285 master-2 kubenswrapper[4776]: E1011 11:31:43.877876 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" containerName="registry-server" Oct 11 11:31:43.878285 master-2 kubenswrapper[4776]: I1011 11:31:43.877884 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" containerName="registry-server" Oct 11 11:31:43.878285 master-2 kubenswrapper[4776]: E1011 11:31:43.877912 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" containerName="extract-utilities" Oct 11 11:31:43.878285 master-2 kubenswrapper[4776]: I1011 11:31:43.877921 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" containerName="extract-utilities" Oct 11 11:31:43.881444 master-2 kubenswrapper[4776]: I1011 11:31:43.880028 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffb7ba2d-b676-4dc0-8809-0613818a3ea6" containerName="registry-server" Oct 11 11:31:43.884150 master-2 kubenswrapper[4776]: I1011 11:31:43.883794 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fg6sc/must-gather-ctrmz" Oct 11 11:31:43.886659 master-2 kubenswrapper[4776]: I1011 11:31:43.886335 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fg6sc"/"kube-root-ca.crt" Oct 11 11:31:43.886826 master-2 kubenswrapper[4776]: I1011 11:31:43.886769 4776 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fg6sc"/"openshift-service-ca.crt" Oct 11 11:31:43.888958 master-2 kubenswrapper[4776]: I1011 11:31:43.888442 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fg6sc/must-gather-ctrmz"] Oct 11 11:31:43.946177 master-2 kubenswrapper[4776]: I1011 11:31:43.946091 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ca7d98dc-8efb-46d2-bb15-c769709ccb4c-must-gather-output\") pod \"must-gather-ctrmz\" (UID: \"ca7d98dc-8efb-46d2-bb15-c769709ccb4c\") " pod="openshift-must-gather-fg6sc/must-gather-ctrmz" Oct 11 11:31:43.946562 master-2 kubenswrapper[4776]: I1011 11:31:43.946532 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92vpm\" (UniqueName: \"kubernetes.io/projected/ca7d98dc-8efb-46d2-bb15-c769709ccb4c-kube-api-access-92vpm\") pod \"must-gather-ctrmz\" (UID: \"ca7d98dc-8efb-46d2-bb15-c769709ccb4c\") " pod="openshift-must-gather-fg6sc/must-gather-ctrmz" Oct 11 11:31:44.049509 master-2 kubenswrapper[4776]: I1011 11:31:44.049413 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ca7d98dc-8efb-46d2-bb15-c769709ccb4c-must-gather-output\") pod \"must-gather-ctrmz\" (UID: \"ca7d98dc-8efb-46d2-bb15-c769709ccb4c\") " pod="openshift-must-gather-fg6sc/must-gather-ctrmz" Oct 11 11:31:44.049776 master-2 kubenswrapper[4776]: I1011 11:31:44.049562 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92vpm\" (UniqueName: \"kubernetes.io/projected/ca7d98dc-8efb-46d2-bb15-c769709ccb4c-kube-api-access-92vpm\") pod \"must-gather-ctrmz\" (UID: \"ca7d98dc-8efb-46d2-bb15-c769709ccb4c\") " pod="openshift-must-gather-fg6sc/must-gather-ctrmz" Oct 11 11:31:44.050066 master-2 kubenswrapper[4776]: I1011 11:31:44.050004 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ca7d98dc-8efb-46d2-bb15-c769709ccb4c-must-gather-output\") pod \"must-gather-ctrmz\" (UID: \"ca7d98dc-8efb-46d2-bb15-c769709ccb4c\") " pod="openshift-must-gather-fg6sc/must-gather-ctrmz" Oct 11 11:31:44.079120 master-2 kubenswrapper[4776]: I1011 11:31:44.079039 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92vpm\" (UniqueName: \"kubernetes.io/projected/ca7d98dc-8efb-46d2-bb15-c769709ccb4c-kube-api-access-92vpm\") pod \"must-gather-ctrmz\" (UID: \"ca7d98dc-8efb-46d2-bb15-c769709ccb4c\") " pod="openshift-must-gather-fg6sc/must-gather-ctrmz" Oct 11 11:31:44.210698 master-2 kubenswrapper[4776]: I1011 11:31:44.208290 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fg6sc/must-gather-ctrmz" Oct 11 11:31:44.650430 master-2 kubenswrapper[4776]: W1011 11:31:44.650378 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca7d98dc_8efb_46d2_bb15_c769709ccb4c.slice/crio-d56cf2965b0cc9e86a38e09f72c8d1a0a0f66c7472924ccbd80a66b2a8a36620 WatchSource:0}: Error finding container d56cf2965b0cc9e86a38e09f72c8d1a0a0f66c7472924ccbd80a66b2a8a36620: Status 404 returned error can't find the container with id d56cf2965b0cc9e86a38e09f72c8d1a0a0f66c7472924ccbd80a66b2a8a36620 Oct 11 11:31:44.651648 master-2 kubenswrapper[4776]: I1011 11:31:44.651614 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fg6sc/must-gather-ctrmz"] Oct 11 11:31:45.026914 master-2 kubenswrapper[4776]: I1011 11:31:45.026834 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fg6sc/must-gather-ctrmz" event={"ID":"ca7d98dc-8efb-46d2-bb15-c769709ccb4c","Type":"ContainerStarted","Data":"d56cf2965b0cc9e86a38e09f72c8d1a0a0f66c7472924ccbd80a66b2a8a36620"} Oct 11 11:31:46.038967 master-2 kubenswrapper[4776]: I1011 11:31:46.038829 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fg6sc/must-gather-ctrmz" event={"ID":"ca7d98dc-8efb-46d2-bb15-c769709ccb4c","Type":"ContainerStarted","Data":"1ea5e25caf43c046d8b5a7743dc0fe6110c578cd5aff0bd0e86ba0a81c064c43"} Oct 11 11:31:47.052202 master-2 kubenswrapper[4776]: I1011 11:31:47.052118 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fg6sc/must-gather-ctrmz" event={"ID":"ca7d98dc-8efb-46d2-bb15-c769709ccb4c","Type":"ContainerStarted","Data":"bd8d93ddec39daef4e83da5e147bac7e76c85e6cea4c1db9ed9f14f7de0d1315"} Oct 11 11:31:47.084162 master-2 kubenswrapper[4776]: I1011 11:31:47.084038 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fg6sc/must-gather-ctrmz" podStartSLOduration=2.953526013 podStartE2EDuration="4.084005315s" podCreationTimestamp="2025-10-11 11:31:43 +0000 UTC" firstStartedPulling="2025-10-11 11:31:44.652535201 +0000 UTC m=+3939.436961910" lastFinishedPulling="2025-10-11 11:31:45.783014503 +0000 UTC m=+3940.567441212" observedRunningTime="2025-10-11 11:31:47.078346813 +0000 UTC m=+3941.862773522" watchObservedRunningTime="2025-10-11 11:31:47.084005315 +0000 UTC m=+3941.868432024" Oct 11 11:31:47.796452 master-2 kubenswrapper[4776]: I1011 11:31:47.796398 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-55bd67947c-tpbwx_b7b07707-84bd-43a6-a43d-6680decaa210/cluster-version-operator/0.log" Oct 11 11:31:51.436338 master-2 kubenswrapper[4776]: I1011 11:31:51.436289 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-p97jd_2df2fa0c-0a2c-41e1-b309-7fde96bdd8b2/nmstate-console-plugin/0.log" Oct 11 11:31:51.493634 master-2 kubenswrapper[4776]: I1011 11:31:51.493556 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-cwqqw_2fd87adc-6c4a-46cb-9fcc-cd35a48b1614/nmstate-handler/0.log" Oct 11 11:31:52.239061 master-2 kubenswrapper[4776]: I1011 11:31:52.239005 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-guard-master-2_9314095b-1661-46bd-8e19-2741d9d758fa/guard/0.log" Oct 11 11:31:52.514694 master-2 kubenswrapper[4776]: I1011 11:31:52.514601 4776 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwrzt_a7969839-a9c5-4a06-8472-84032bfb16f1/controller/0.log" Oct 11 11:31:53.033112 master-2 kubenswrapper[4776]: I1011 11:31:53.033055 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_cd7826f9db5842f000a071fd58a1ae79/etcdctl/0.log" Oct 11 11:31:53.192415 master-2 kubenswrapper[4776]: I1011 11:31:53.192272 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_cd7826f9db5842f000a071fd58a1ae79/etcd/0.log" Oct 11 11:31:53.213576 master-2 kubenswrapper[4776]: I1011 11:31:53.213534 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_cd7826f9db5842f000a071fd58a1ae79/etcd-metrics/0.log" Oct 11 11:31:53.238754 master-2 kubenswrapper[4776]: I1011 11:31:53.238692 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_cd7826f9db5842f000a071fd58a1ae79/etcd-readyz/0.log" Oct 11 11:31:53.280561 master-2 kubenswrapper[4776]: I1011 11:31:53.280495 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_cd7826f9db5842f000a071fd58a1ae79/etcd-rev/0.log" Oct 11 11:31:53.305734 master-2 kubenswrapper[4776]: I1011 11:31:53.305650 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_cd7826f9db5842f000a071fd58a1ae79/setup/0.log" Oct 11 11:31:53.345780 master-2 kubenswrapper[4776]: I1011 11:31:53.344609 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_cd7826f9db5842f000a071fd58a1ae79/etcd-ensure-env-vars/0.log" Oct 11 11:31:53.378695 master-2 kubenswrapper[4776]: I1011 11:31:53.378366 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_cd7826f9db5842f000a071fd58a1ae79/etcd-resources-copy/0.log" Oct 11 11:31:53.587211 master-2 kubenswrapper[4776]: I1011 11:31:53.586256 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-10-master-2_56e683e1-6c74-4998-ac94-05f58a65965f/installer/0.log" Oct 11 11:31:53.722617 master-2 kubenswrapper[4776]: I1011 11:31:53.722457 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwrzt_a7969839-a9c5-4a06-8472-84032bfb16f1/frr/0.log" Oct 11 11:31:53.738261 master-2 kubenswrapper[4776]: I1011 11:31:53.736351 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwrzt_a7969839-a9c5-4a06-8472-84032bfb16f1/reloader/0.log" Oct 11 11:31:53.754764 master-2 kubenswrapper[4776]: I1011 11:31:53.754649 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_revision-pruner-10-master-2_c2320cb4-bf2c-4d63-b9c6-5a7461a547e8/pruner/0.log" Oct 11 11:31:53.756740 master-2 kubenswrapper[4776]: I1011 11:31:53.756721 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwrzt_a7969839-a9c5-4a06-8472-84032bfb16f1/frr-metrics/0.log" Oct 11 11:31:53.776904 master-2 kubenswrapper[4776]: I1011 11:31:53.775416 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwrzt_a7969839-a9c5-4a06-8472-84032bfb16f1/kube-rbac-proxy/0.log" Oct 11 11:31:53.816236 master-2 kubenswrapper[4776]: I1011 11:31:53.816188 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwrzt_a7969839-a9c5-4a06-8472-84032bfb16f1/kube-rbac-proxy-frr/0.log" Oct 11 11:31:53.849572 master-2 kubenswrapper[4776]: I1011 11:31:53.849494 4776 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwrzt_a7969839-a9c5-4a06-8472-84032bfb16f1/cp-frr-files/0.log" Oct 11 11:31:53.875704 master-2 kubenswrapper[4776]: I1011 11:31:53.874948 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwrzt_a7969839-a9c5-4a06-8472-84032bfb16f1/cp-reloader/0.log" Oct 11 11:31:53.887921 master-2 kubenswrapper[4776]: I1011 11:31:53.887864 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwrzt_a7969839-a9c5-4a06-8472-84032bfb16f1/cp-metrics/0.log" Oct 11 11:31:54.165177 master-2 kubenswrapper[4776]: I1011 11:31:54.164495 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-6fccd5ccc-lxq75_6d7c74c7-9652-4fe6-93c3-667ec676ce1c/oauth-openshift/0.log" Oct 11 11:31:55.268973 master-2 kubenswrapper[4776]: I1011 11:31:55.268904 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-v6dfc_8757af56-20fb-439e-adba-7e4e50378936/assisted-installer-controller/0.log" Oct 11 11:31:55.433484 master-2 kubenswrapper[4776]: I1011 11:31:55.433414 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-66df44bc95-kxhjc_7004f3ff-6db8-446d-94c1-1223e975299d/authentication-operator/0.log" Oct 11 11:31:55.447377 master-2 kubenswrapper[4776]: I1011 11:31:55.447304 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-66df44bc95-kxhjc_7004f3ff-6db8-446d-94c1-1223e975299d/authentication-operator/1.log" Oct 11 11:31:56.233308 master-2 kubenswrapper[4776]: I1011 11:31:56.233207 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fg6sc/master-2-debug-t7v9k"] Oct 11 11:31:56.234662 master-2 kubenswrapper[4776]: I1011 11:31:56.234618 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fg6sc/master-2-debug-t7v9k" Oct 11 11:31:56.346407 master-2 kubenswrapper[4776]: I1011 11:31:56.346326 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqznf\" (UniqueName: \"kubernetes.io/projected/aeab3fcf-41a2-4b37-933b-392e28759321-kube-api-access-bqznf\") pod \"master-2-debug-t7v9k\" (UID: \"aeab3fcf-41a2-4b37-933b-392e28759321\") " pod="openshift-must-gather-fg6sc/master-2-debug-t7v9k" Oct 11 11:31:56.346407 master-2 kubenswrapper[4776]: I1011 11:31:56.346428 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aeab3fcf-41a2-4b37-933b-392e28759321-host\") pod \"master-2-debug-t7v9k\" (UID: \"aeab3fcf-41a2-4b37-933b-392e28759321\") " pod="openshift-must-gather-fg6sc/master-2-debug-t7v9k" Oct 11 11:31:56.354213 master-2 kubenswrapper[4776]: I1011 11:31:56.354140 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq"] Oct 11 11:31:56.355738 master-2 kubenswrapper[4776]: I1011 11:31:56.355658 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq" Oct 11 11:31:56.382832 master-2 kubenswrapper[4776]: I1011 11:31:56.382653 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq"] Oct 11 11:31:56.448647 master-2 kubenswrapper[4776]: I1011 11:31:56.448560 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aeab3fcf-41a2-4b37-933b-392e28759321-host\") pod \"master-2-debug-t7v9k\" (UID: \"aeab3fcf-41a2-4b37-933b-392e28759321\") " pod="openshift-must-gather-fg6sc/master-2-debug-t7v9k" Oct 11 11:31:56.448647 master-2 kubenswrapper[4776]: I1011 11:31:56.448630 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf79d4cf-cea8-4226-94c2-176d3b90b8cd-lib-modules\") pod \"perf-node-gather-daemonset-q9grq\" (UID: \"bf79d4cf-cea8-4226-94c2-176d3b90b8cd\") " pod="openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq" Oct 11 11:31:56.449001 master-2 kubenswrapper[4776]: I1011 11:31:56.448665 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l2qf\" (UniqueName: \"kubernetes.io/projected/bf79d4cf-cea8-4226-94c2-176d3b90b8cd-kube-api-access-5l2qf\") pod \"perf-node-gather-daemonset-q9grq\" (UID: \"bf79d4cf-cea8-4226-94c2-176d3b90b8cd\") " pod="openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq" Oct 11 11:31:56.449001 master-2 kubenswrapper[4776]: I1011 11:31:56.448779 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bf79d4cf-cea8-4226-94c2-176d3b90b8cd-sys\") pod \"perf-node-gather-daemonset-q9grq\" (UID: \"bf79d4cf-cea8-4226-94c2-176d3b90b8cd\") " pod="openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq" Oct 11 11:31:56.449001 master-2 kubenswrapper[4776]: I1011 11:31:56.448796 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/bf79d4cf-cea8-4226-94c2-176d3b90b8cd-proc\") pod \"perf-node-gather-daemonset-q9grq\" (UID: \"bf79d4cf-cea8-4226-94c2-176d3b90b8cd\") " pod="openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq" Oct 11 11:31:56.449001 master-2 kubenswrapper[4776]: I1011 11:31:56.448850 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/bf79d4cf-cea8-4226-94c2-176d3b90b8cd-podres\") pod \"perf-node-gather-daemonset-q9grq\" (UID: \"bf79d4cf-cea8-4226-94c2-176d3b90b8cd\") " pod="openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq" Oct 11 11:31:56.449001 master-2 kubenswrapper[4776]: I1011 11:31:56.448882 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqznf\" (UniqueName: \"kubernetes.io/projected/aeab3fcf-41a2-4b37-933b-392e28759321-kube-api-access-bqznf\") pod \"master-2-debug-t7v9k\" (UID: \"aeab3fcf-41a2-4b37-933b-392e28759321\") " pod="openshift-must-gather-fg6sc/master-2-debug-t7v9k" Oct 11 11:31:56.449853 master-2 kubenswrapper[4776]: I1011 11:31:56.449073 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aeab3fcf-41a2-4b37-933b-392e28759321-host\") pod \"master-2-debug-t7v9k\" (UID: 
\"aeab3fcf-41a2-4b37-933b-392e28759321\") " pod="openshift-must-gather-fg6sc/master-2-debug-t7v9k" Oct 11 11:31:56.458609 master-2 kubenswrapper[4776]: I1011 11:31:56.458556 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5ddb89f76-57kcw_c8cd90ff-e70c-4837-82c4-0fec67a8a51b/router/3.log" Oct 11 11:31:56.470547 master-2 kubenswrapper[4776]: I1011 11:31:56.470367 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqznf\" (UniqueName: \"kubernetes.io/projected/aeab3fcf-41a2-4b37-933b-392e28759321-kube-api-access-bqznf\") pod \"master-2-debug-t7v9k\" (UID: \"aeab3fcf-41a2-4b37-933b-392e28759321\") " pod="openshift-must-gather-fg6sc/master-2-debug-t7v9k" Oct 11 11:31:56.475755 master-2 kubenswrapper[4776]: I1011 11:31:56.473295 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5ddb89f76-57kcw_c8cd90ff-e70c-4837-82c4-0fec67a8a51b/router/2.log" Oct 11 11:31:56.552562 master-2 kubenswrapper[4776]: I1011 11:31:56.552377 4776 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fg6sc/master-2-debug-t7v9k" Oct 11 11:31:56.555208 master-2 kubenswrapper[4776]: I1011 11:31:56.555149 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/bf79d4cf-cea8-4226-94c2-176d3b90b8cd-podres\") pod \"perf-node-gather-daemonset-q9grq\" (UID: \"bf79d4cf-cea8-4226-94c2-176d3b90b8cd\") " pod="openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq" Oct 11 11:31:56.555371 master-2 kubenswrapper[4776]: I1011 11:31:56.555333 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf79d4cf-cea8-4226-94c2-176d3b90b8cd-lib-modules\") pod \"perf-node-gather-daemonset-q9grq\" (UID: \"bf79d4cf-cea8-4226-94c2-176d3b90b8cd\") " pod="openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq" Oct 11 11:31:56.555440 master-2 kubenswrapper[4776]: I1011 11:31:56.555382 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l2qf\" (UniqueName: \"kubernetes.io/projected/bf79d4cf-cea8-4226-94c2-176d3b90b8cd-kube-api-access-5l2qf\") pod \"perf-node-gather-daemonset-q9grq\" (UID: \"bf79d4cf-cea8-4226-94c2-176d3b90b8cd\") " pod="openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq" Oct 11 11:31:56.555440 master-2 kubenswrapper[4776]: I1011 11:31:56.555404 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/bf79d4cf-cea8-4226-94c2-176d3b90b8cd-podres\") pod \"perf-node-gather-daemonset-q9grq\" (UID: \"bf79d4cf-cea8-4226-94c2-176d3b90b8cd\") " pod="openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq" Oct 11 11:31:56.555532 master-2 kubenswrapper[4776]: I1011 11:31:56.555516 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bf79d4cf-cea8-4226-94c2-176d3b90b8cd-sys\") pod \"perf-node-gather-daemonset-q9grq\" (UID: \"bf79d4cf-cea8-4226-94c2-176d3b90b8cd\") " pod="openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq" Oct 11 11:31:56.555591 master-2 kubenswrapper[4776]: I1011 11:31:56.555546 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/bf79d4cf-cea8-4226-94c2-176d3b90b8cd-proc\") pod 
\"perf-node-gather-daemonset-q9grq\" (UID: \"bf79d4cf-cea8-4226-94c2-176d3b90b8cd\") " pod="openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq" Oct 11 11:31:56.555819 master-2 kubenswrapper[4776]: I1011 11:31:56.555662 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf79d4cf-cea8-4226-94c2-176d3b90b8cd-lib-modules\") pod \"perf-node-gather-daemonset-q9grq\" (UID: \"bf79d4cf-cea8-4226-94c2-176d3b90b8cd\") " pod="openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq" Oct 11 11:31:56.555819 master-2 kubenswrapper[4776]: I1011 11:31:56.555684 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bf79d4cf-cea8-4226-94c2-176d3b90b8cd-sys\") pod \"perf-node-gather-daemonset-q9grq\" (UID: \"bf79d4cf-cea8-4226-94c2-176d3b90b8cd\") " pod="openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq" Oct 11 11:31:56.555819 master-2 kubenswrapper[4776]: I1011 11:31:56.555732 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/bf79d4cf-cea8-4226-94c2-176d3b90b8cd-proc\") pod \"perf-node-gather-daemonset-q9grq\" (UID: \"bf79d4cf-cea8-4226-94c2-176d3b90b8cd\") " pod="openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq" Oct 11 11:31:56.577861 master-2 kubenswrapper[4776]: I1011 11:31:56.577443 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l2qf\" (UniqueName: \"kubernetes.io/projected/bf79d4cf-cea8-4226-94c2-176d3b90b8cd-kube-api-access-5l2qf\") pod \"perf-node-gather-daemonset-q9grq\" (UID: \"bf79d4cf-cea8-4226-94c2-176d3b90b8cd\") " pod="openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq" Oct 11 11:31:56.673497 master-2 kubenswrapper[4776]: I1011 11:31:56.673406 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq" Oct 11 11:31:57.134246 master-2 kubenswrapper[4776]: I1011 11:31:57.132847 4776 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq"] Oct 11 11:31:57.139624 master-2 kubenswrapper[4776]: W1011 11:31:57.139527 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbf79d4cf_cea8_4226_94c2_176d3b90b8cd.slice/crio-7c99e9a39e1a8be403ac04a5f1b4e1a5cd8ff23ee0d0eae14bbec347d1879065 WatchSource:0}: Error finding container 7c99e9a39e1a8be403ac04a5f1b4e1a5cd8ff23ee0d0eae14bbec347d1879065: Status 404 returned error can't find the container with id 7c99e9a39e1a8be403ac04a5f1b4e1a5cd8ff23ee0d0eae14bbec347d1879065 Oct 11 11:31:57.193067 master-2 kubenswrapper[4776]: I1011 11:31:57.192969 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fg6sc/master-2-debug-t7v9k" event={"ID":"aeab3fcf-41a2-4b37-933b-392e28759321","Type":"ContainerStarted","Data":"c9ba21dff31a69bf159412fb03517db0a3a3859ea3054cddee067a2e7d063621"} Oct 11 11:31:57.195448 master-2 kubenswrapper[4776]: I1011 11:31:57.195414 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq" event={"ID":"bf79d4cf-cea8-4226-94c2-176d3b90b8cd","Type":"ContainerStarted","Data":"7c99e9a39e1a8be403ac04a5f1b4e1a5cd8ff23ee0d0eae14bbec347d1879065"} Oct 11 11:31:57.318104 master-2 kubenswrapper[4776]: I1011 11:31:57.318040 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-68f4c55ff4-hr9gc_1d346790-931a-4f91-b588-0b6249da0cd0/oauth-apiserver/0.log" Oct 11 11:31:57.344013 master-2 kubenswrapper[4776]: I1011 11:31:57.343956 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-68f4c55ff4-hr9gc_1d346790-931a-4f91-b588-0b6249da0cd0/fix-audit-permissions/0.log" Oct 11 11:31:57.686723 master-2 kubenswrapper[4776]: I1011 11:31:57.683942 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g9nhb_018da26f-14c3-468f-bab0-089a91b3ef26/speaker/0.log" Oct 11 11:31:57.693577 master-2 kubenswrapper[4776]: I1011 11:31:57.693538 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g9nhb_018da26f-14c3-468f-bab0-089a91b3ef26/kube-rbac-proxy/0.log" Oct 11 11:31:58.209317 master-2 kubenswrapper[4776]: I1011 11:31:58.209224 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq" event={"ID":"bf79d4cf-cea8-4226-94c2-176d3b90b8cd","Type":"ContainerStarted","Data":"eaeb0f4623d0e0676bca1db8a5bba215cdd84d7ba22bf3ec1dd5d22e89fa372d"} Oct 11 11:31:58.209317 master-2 kubenswrapper[4776]: I1011 11:31:58.209404 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq" Oct 11 11:31:58.237866 master-2 kubenswrapper[4776]: I1011 11:31:58.234993 4776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq" podStartSLOduration=2.234973481 podStartE2EDuration="2.234973481s" podCreationTimestamp="2025-10-11 11:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 11:31:58.231618501 +0000 UTC m=+3953.016045210" 
watchObservedRunningTime="2025-10-11 11:31:58.234973481 +0000 UTC m=+3953.019400190" Oct 11 11:31:58.334732 master-2 kubenswrapper[4776]: I1011 11:31:58.333081 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-7ff449c7c5-cfvjb_6b8dc5b8-3c48-4dba-9992-6e269ca133f1/kube-rbac-proxy/0.log" Oct 11 11:31:58.413953 master-2 kubenswrapper[4776]: I1011 11:31:58.413871 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-7ff449c7c5-cfvjb_6b8dc5b8-3c48-4dba-9992-6e269ca133f1/cluster-autoscaler-operator/0.log" Oct 11 11:31:58.440491 master-2 kubenswrapper[4776]: I1011 11:31:58.440443 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6c8fbf4498-wq4jf_66dee5be-e631-462d-8a2c-51a2031a83a2/cluster-baremetal-operator/0.log" Oct 11 11:31:58.464533 master-2 kubenswrapper[4776]: I1011 11:31:58.464397 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6c8fbf4498-wq4jf_66dee5be-e631-462d-8a2c-51a2031a83a2/baremetal-kube-rbac-proxy/0.log" Oct 11 11:31:58.493829 master-2 kubenswrapper[4776]: I1011 11:31:58.493781 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-84f9cbd5d9-bjntd_7e860f23-9dae-4606-9426-0edec38a332f/control-plane-machine-set-operator/0.log" Oct 11 11:31:58.524366 master-2 kubenswrapper[4776]: I1011 11:31:58.524306 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-9dbb96f7-b88g6_548333d7-2374-4c38-b4fd-45c2bee2ac4e/kube-rbac-proxy/0.log" Oct 11 11:31:58.557560 master-2 kubenswrapper[4776]: I1011 11:31:58.557506 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-9dbb96f7-b88g6_548333d7-2374-4c38-b4fd-45c2bee2ac4e/machine-api-operator/0.log" Oct 11 11:32:02.334999 master-2 kubenswrapper[4776]: I1011 11:32:02.334157 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-5cf49b6487-8d7xr_eba1e82e-9f3e-4273-836e-9407cc394b10/kube-rbac-proxy/0.log" Oct 11 11:32:02.363953 master-2 kubenswrapper[4776]: I1011 11:32:02.363749 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-5cf49b6487-8d7xr_eba1e82e-9f3e-4273-836e-9407cc394b10/cloud-credential-operator/0.log" Oct 11 11:32:04.260988 master-2 kubenswrapper[4776]: I1011 11:32:04.260929 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-55957b47d5-f7vv7_9d362fb9-48e4-4d72-a940-ec6c9c051fac/openshift-config-operator/1.log" Oct 11 11:32:04.262155 master-2 kubenswrapper[4776]: I1011 11:32:04.262016 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-55957b47d5-f7vv7_9d362fb9-48e4-4d72-a940-ec6c9c051fac/openshift-config-operator/0.log" Oct 11 11:32:04.287984 master-2 kubenswrapper[4776]: I1011 11:32:04.287934 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-55957b47d5-f7vv7_9d362fb9-48e4-4d72-a940-ec6c9c051fac/openshift-api/0.log" Oct 11 11:32:06.097330 master-2 kubenswrapper[4776]: I1011 11:32:06.097196 4776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-69f8677c95-z9d9d_722d06e2-c934-4ba0-82e4-51c4b2104851/console/0.log" Oct 11 11:32:06.718994 master-2 kubenswrapper[4776]: I1011 11:32:06.718929 4776 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-fg6sc/perf-node-gather-daemonset-q9grq" Oct 11 11:32:06.843559 master-2 kubenswrapper[4776]: I1011 11:32:06.843489 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-65bb9777fc-bkmsm_31d64616-a514-4ae3-bb6d-d6eb14d9147a/download-server/0.log" Oct 11 11:32:07.302852 master-2 kubenswrapper[4776]: I1011 11:32:07.302781 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fg6sc/master-2-debug-t7v9k" event={"ID":"aeab3fcf-41a2-4b37-933b-392e28759321","Type":"ContainerStarted","Data":"114ec66708dfbedf63a2f066c812b6a651f922f82d46594e74e530dcc4b0af4d"} Oct 11 11:32:08.461620 master-2 kubenswrapper[4776]: I1011 11:32:08.461575 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-56d4b95494-9fbb2_e540333c-4b4d-439e-a82a-cd3a97c95a43/cluster-storage-operator/2.log" Oct 11 11:32:08.465615 master-2 kubenswrapper[4776]: I1011 11:32:08.465572 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-56d4b95494-9fbb2_e540333c-4b4d-439e-a82a-cd3a97c95a43/cluster-storage-operator/1.log" Oct 11 11:32:08.490033 master-2 kubenswrapper[4776]: I1011 11:32:08.489975 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-ddd7d64cd-95l49_b5b27c80-52a3-4747-a128-28952a667faa/snapshot-controller/0.log" Oct 11 11:32:08.554503 master-2 kubenswrapper[4776]: I1011 11:32:08.553920 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-7ff96dd767-vv9w8_a0b806b9-13ff-45fa-afba-5d0c89eac7df/csi-snapshot-controller-operator/0.log" Oct 11 11:32:09.204668 master-2 kubenswrapper[4776]: I1011 11:32:09.204600 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-7769d9677-wh775_893af718-1fec-4b8b-8349-d85f978f4140/dns-operator/0.log" Oct 11 11:32:09.231989 master-2 kubenswrapper[4776]: I1011 11:32:09.231945 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-7769d9677-wh775_893af718-1fec-4b8b-8349-d85f978f4140/kube-rbac-proxy/0.log" Oct 11 11:32:10.015775 master-2 kubenswrapper[4776]: I1011 11:32:10.015514 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-sgvjd_e3f3ba3c-1d27-4529-9ae3-a61f88e50b62/dns/0.log" Oct 11 11:32:10.036499 master-2 kubenswrapper[4776]: I1011 11:32:10.036441 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-sgvjd_e3f3ba3c-1d27-4529-9ae3-a61f88e50b62/kube-rbac-proxy/0.log" Oct 11 11:32:10.287910 master-2 kubenswrapper[4776]: I1011 11:32:10.287784 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-z9trl_0550ab10-d45d-4526-8551-c1ce0b232bbc/dns-node-resolver/0.log" Oct 11 11:32:11.021407 master-2 kubenswrapper[4776]: I1011 11:32:11.021358 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-6bddf7d79-8wc54_a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1/etcd-operator/0.log" Oct 11 11:32:11.058012 master-2 kubenswrapper[4776]: I1011 
11:32:11.057949 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-6bddf7d79-8wc54_a35883b8-6cf5-45d7-a4e3-02c0ac0d91e1/etcd-operator/1.log" Oct 11 11:32:11.730764 master-2 kubenswrapper[4776]: I1011 11:32:11.730708 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-guard-master-2_9314095b-1661-46bd-8e19-2741d9d758fa/guard/0.log" Oct 11 11:32:12.478643 master-2 kubenswrapper[4776]: I1011 11:32:12.478551 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_cd7826f9db5842f000a071fd58a1ae79/etcdctl/0.log" Oct 11 11:32:12.668142 master-2 kubenswrapper[4776]: I1011 11:32:12.668083 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_cd7826f9db5842f000a071fd58a1ae79/etcd/0.log" Oct 11 11:32:12.693371 master-2 kubenswrapper[4776]: I1011 11:32:12.693328 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_cd7826f9db5842f000a071fd58a1ae79/etcd-metrics/0.log" Oct 11 11:32:12.715789 master-2 kubenswrapper[4776]: I1011 11:32:12.715732 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_cd7826f9db5842f000a071fd58a1ae79/etcd-readyz/0.log" Oct 11 11:32:12.740002 master-2 kubenswrapper[4776]: I1011 11:32:12.739853 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_cd7826f9db5842f000a071fd58a1ae79/etcd-rev/0.log" Oct 11 11:32:12.766238 master-2 kubenswrapper[4776]: I1011 11:32:12.766111 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_cd7826f9db5842f000a071fd58a1ae79/setup/0.log" Oct 11 11:32:12.788123 master-2 kubenswrapper[4776]: I1011 11:32:12.788081 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_cd7826f9db5842f000a071fd58a1ae79/etcd-ensure-env-vars/0.log" Oct 11 11:32:12.815830 master-2 kubenswrapper[4776]: I1011 11:32:12.815771 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-2_cd7826f9db5842f000a071fd58a1ae79/etcd-resources-copy/0.log" Oct 11 11:32:13.038654 master-2 kubenswrapper[4776]: I1011 11:32:13.038436 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-10-master-2_56e683e1-6c74-4998-ac94-05f58a65965f/installer/0.log" Oct 11 11:32:13.180106 master-2 kubenswrapper[4776]: I1011 11:32:13.179967 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_revision-pruner-10-master-2_c2320cb4-bf2c-4d63-b9c6-5a7461a547e8/pruner/0.log" Oct 11 11:32:14.321752 master-2 kubenswrapper[4776]: I1011 11:32:14.321499 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-6b8674d7ff-mwbsr_b562963f-7112-411a-a64c-3b8eba909c59/cluster-image-registry-operator/0.log" Oct 11 11:32:14.373929 master-2 kubenswrapper[4776]: I1011 11:32:14.373872 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-jl6f8_b1a4fd85-5da5-4697-b524-a68be3d018cf/node-ca/0.log" Oct 11 11:32:15.028572 master-2 kubenswrapper[4776]: I1011 11:32:15.028501 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/2.log" Oct 11 11:32:15.031829 master-2 kubenswrapper[4776]: I1011 11:32:15.031783 4776 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/ingress-operator/3.log" Oct 11 11:32:15.056091 master-2 kubenswrapper[4776]: I1011 11:32:15.055967 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-wf7mj_6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c/kube-rbac-proxy/0.log" Oct 11 11:32:15.907601 master-2 kubenswrapper[4776]: I1011 11:32:15.907539 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-rr7vn_b5880f74-fbfb-498e-9b47-d8d909d240e0/serve-healthcheck-canary/0.log" Oct 11 11:32:16.389289 master-2 kubenswrapper[4776]: I1011 11:32:16.389175 4776 generic.go:334] "Generic (PLEG): container finished" podID="aeab3fcf-41a2-4b37-933b-392e28759321" containerID="114ec66708dfbedf63a2f066c812b6a651f922f82d46594e74e530dcc4b0af4d" exitCode=0 Oct 11 11:32:16.389289 master-2 kubenswrapper[4776]: I1011 11:32:16.389228 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fg6sc/master-2-debug-t7v9k" event={"ID":"aeab3fcf-41a2-4b37-933b-392e28759321","Type":"ContainerDied","Data":"114ec66708dfbedf63a2f066c812b6a651f922f82d46594e74e530dcc4b0af4d"} Oct 11 11:32:16.881247 master-2 kubenswrapper[4776]: I1011 11:32:16.881186 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-7dcf5bd85b-6c2rl_59763d5b-237f-4095-bf52-86bb0154381c/insights-operator/1.log" Oct 11 11:32:16.887050 master-2 kubenswrapper[4776]: I1011 11:32:16.886980 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-7dcf5bd85b-6c2rl_59763d5b-237f-4095-bf52-86bb0154381c/insights-operator/0.log" Oct 11 11:32:17.497608 master-2 kubenswrapper[4776]: I1011 11:32:17.497477 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fg6sc/master-2-debug-t7v9k" Oct 11 11:32:17.540321 master-2 kubenswrapper[4776]: I1011 11:32:17.540237 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fg6sc/master-2-debug-t7v9k"] Oct 11 11:32:17.546988 master-2 kubenswrapper[4776]: I1011 11:32:17.546928 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fg6sc/master-2-debug-t7v9k"] Oct 11 11:32:17.626232 master-2 kubenswrapper[4776]: I1011 11:32:17.626119 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqznf\" (UniqueName: \"kubernetes.io/projected/aeab3fcf-41a2-4b37-933b-392e28759321-kube-api-access-bqznf\") pod \"aeab3fcf-41a2-4b37-933b-392e28759321\" (UID: \"aeab3fcf-41a2-4b37-933b-392e28759321\") " Oct 11 11:32:17.626733 master-2 kubenswrapper[4776]: I1011 11:32:17.626282 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aeab3fcf-41a2-4b37-933b-392e28759321-host\") pod \"aeab3fcf-41a2-4b37-933b-392e28759321\" (UID: \"aeab3fcf-41a2-4b37-933b-392e28759321\") " Oct 11 11:32:17.626733 master-2 kubenswrapper[4776]: I1011 11:32:17.626401 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aeab3fcf-41a2-4b37-933b-392e28759321-host" (OuterVolumeSpecName: "host") pod "aeab3fcf-41a2-4b37-933b-392e28759321" (UID: "aeab3fcf-41a2-4b37-933b-392e28759321"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 11:32:17.626942 master-2 kubenswrapper[4776]: I1011 11:32:17.626896 4776 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aeab3fcf-41a2-4b37-933b-392e28759321-host\") on node \"master-2\" DevicePath \"\"" Oct 11 11:32:17.630472 master-2 kubenswrapper[4776]: I1011 11:32:17.630427 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeab3fcf-41a2-4b37-933b-392e28759321-kube-api-access-bqznf" (OuterVolumeSpecName: "kube-api-access-bqznf") pod "aeab3fcf-41a2-4b37-933b-392e28759321" (UID: "aeab3fcf-41a2-4b37-933b-392e28759321"). InnerVolumeSpecName "kube-api-access-bqznf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:32:17.728411 master-2 kubenswrapper[4776]: I1011 11:32:17.728348 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqznf\" (UniqueName: \"kubernetes.io/projected/aeab3fcf-41a2-4b37-933b-392e28759321-kube-api-access-bqznf\") on node \"master-2\" DevicePath \"\"" Oct 11 11:32:18.072386 master-2 kubenswrapper[4776]: I1011 11:32:18.072312 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeab3fcf-41a2-4b37-933b-392e28759321" path="/var/lib/kubelet/pods/aeab3fcf-41a2-4b37-933b-392e28759321/volumes" Oct 11 11:32:18.407938 master-2 kubenswrapper[4776]: I1011 11:32:18.407899 4776 scope.go:117] "RemoveContainer" containerID="114ec66708dfbedf63a2f066c812b6a651f922f82d46594e74e530dcc4b0af4d" Oct 11 11:32:18.408493 master-2 kubenswrapper[4776]: I1011 11:32:18.408476 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fg6sc/master-2-debug-t7v9k" Oct 11 11:32:18.848708 master-2 kubenswrapper[4776]: I1011 11:32:18.848603 4776 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fg6sc/master-2-debug-z52xq"] Oct 11 11:32:18.849372 master-2 kubenswrapper[4776]: E1011 11:32:18.849316 4776 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeab3fcf-41a2-4b37-933b-392e28759321" containerName="container-00" Oct 11 11:32:18.849372 master-2 kubenswrapper[4776]: I1011 11:32:18.849343 4776 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeab3fcf-41a2-4b37-933b-392e28759321" containerName="container-00" Oct 11 11:32:18.849691 master-2 kubenswrapper[4776]: I1011 11:32:18.849635 4776 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeab3fcf-41a2-4b37-933b-392e28759321" containerName="container-00" Oct 11 11:32:18.850885 master-2 kubenswrapper[4776]: I1011 11:32:18.850842 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fg6sc/master-2-debug-z52xq" Oct 11 11:32:18.955130 master-2 kubenswrapper[4776]: I1011 11:32:18.955060 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l67q\" (UniqueName: \"kubernetes.io/projected/a93f155d-d266-4c26-a637-78bbea2930c0-kube-api-access-8l67q\") pod \"master-2-debug-z52xq\" (UID: \"a93f155d-d266-4c26-a637-78bbea2930c0\") " pod="openshift-must-gather-fg6sc/master-2-debug-z52xq" Oct 11 11:32:18.955130 master-2 kubenswrapper[4776]: I1011 11:32:18.955141 4776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a93f155d-d266-4c26-a637-78bbea2930c0-host\") pod \"master-2-debug-z52xq\" (UID: \"a93f155d-d266-4c26-a637-78bbea2930c0\") " pod="openshift-must-gather-fg6sc/master-2-debug-z52xq" Oct 11 11:32:19.057353 master-2 kubenswrapper[4776]: I1011 11:32:19.057154 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a93f155d-d266-4c26-a637-78bbea2930c0-host\") pod \"master-2-debug-z52xq\" (UID: \"a93f155d-d266-4c26-a637-78bbea2930c0\") " pod="openshift-must-gather-fg6sc/master-2-debug-z52xq" Oct 11 11:32:19.057353 master-2 kubenswrapper[4776]: I1011 11:32:19.057352 4776 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l67q\" (UniqueName: \"kubernetes.io/projected/a93f155d-d266-4c26-a637-78bbea2930c0-kube-api-access-8l67q\") pod \"master-2-debug-z52xq\" (UID: \"a93f155d-d266-4c26-a637-78bbea2930c0\") " pod="openshift-must-gather-fg6sc/master-2-debug-z52xq" Oct 11 11:32:19.057793 master-2 kubenswrapper[4776]: I1011 11:32:19.057756 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a93f155d-d266-4c26-a637-78bbea2930c0-host\") pod \"master-2-debug-z52xq\" (UID: \"a93f155d-d266-4c26-a637-78bbea2930c0\") " pod="openshift-must-gather-fg6sc/master-2-debug-z52xq" Oct 11 11:32:19.086146 master-2 kubenswrapper[4776]: I1011 11:32:19.086021 4776 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l67q\" (UniqueName: \"kubernetes.io/projected/a93f155d-d266-4c26-a637-78bbea2930c0-kube-api-access-8l67q\") pod \"master-2-debug-z52xq\" (UID: \"a93f155d-d266-4c26-a637-78bbea2930c0\") " pod="openshift-must-gather-fg6sc/master-2-debug-z52xq" Oct 11 11:32:19.176861 master-2 kubenswrapper[4776]: I1011 11:32:19.176805 4776 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fg6sc/master-2-debug-z52xq" Oct 11 11:32:19.202521 master-2 kubenswrapper[4776]: W1011 11:32:19.202465 4776 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda93f155d_d266_4c26_a637_78bbea2930c0.slice/crio-38caec36b0fcb11b09836b36ebbd0ba689368af9e09447580402218aacc1696d WatchSource:0}: Error finding container 38caec36b0fcb11b09836b36ebbd0ba689368af9e09447580402218aacc1696d: Status 404 returned error can't find the container with id 38caec36b0fcb11b09836b36ebbd0ba689368af9e09447580402218aacc1696d Oct 11 11:32:19.337578 master-2 kubenswrapper[4776]: I1011 11:32:19.337452 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-5b5dd85dcc-h8588_dbaa6ca7-9865-42f6-8030-2decf702caa1/cluster-monitoring-operator/0.log" Oct 11 11:32:19.358590 master-2 kubenswrapper[4776]: I1011 11:32:19.358261 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-57fbd47578-g6s84_2c2bfc6c-87cf-45df-8901-abe788ae6d98/kube-state-metrics/0.log" Oct 11 11:32:19.384745 master-2 kubenswrapper[4776]: I1011 11:32:19.384657 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-57fbd47578-g6s84_2c2bfc6c-87cf-45df-8901-abe788ae6d98/kube-rbac-proxy-main/0.log" Oct 11 11:32:19.406345 master-2 kubenswrapper[4776]: I1011 11:32:19.406285 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-57fbd47578-g6s84_2c2bfc6c-87cf-45df-8901-abe788ae6d98/kube-rbac-proxy-self/0.log" Oct 11 11:32:19.417562 master-2 kubenswrapper[4776]: I1011 11:32:19.417504 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fg6sc/master-2-debug-z52xq" event={"ID":"a93f155d-d266-4c26-a637-78bbea2930c0","Type":"ContainerStarted","Data":"38caec36b0fcb11b09836b36ebbd0ba689368af9e09447580402218aacc1696d"} Oct 11 11:32:19.498995 master-2 kubenswrapper[4776]: I1011 11:32:19.498936 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-578f8b47b8-5qgnr_d38d167f-f15f-4f7e-8717-46dc61374f4a/monitoring-plugin/0.log" Oct 11 11:32:19.772040 master-2 kubenswrapper[4776]: I1011 11:32:19.771984 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-x7xhm_cb15485f-03bd-4281-8626-f35346cf4b0b/node-exporter/0.log" Oct 11 11:32:19.794154 master-2 kubenswrapper[4776]: I1011 11:32:19.794097 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-x7xhm_cb15485f-03bd-4281-8626-f35346cf4b0b/kube-rbac-proxy/0.log" Oct 11 11:32:19.830979 master-2 kubenswrapper[4776]: I1011 11:32:19.830932 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-x7xhm_cb15485f-03bd-4281-8626-f35346cf4b0b/init-textfile/0.log" Oct 11 11:32:20.456190 master-2 kubenswrapper[4776]: I1011 11:32:20.456075 4776 generic.go:334] "Generic (PLEG): container finished" podID="a93f155d-d266-4c26-a637-78bbea2930c0" containerID="494092c165a595989cfe110879f42089895a5335a151a6b79d3adc338826fa1e" exitCode=1 Oct 11 11:32:20.456190 master-2 kubenswrapper[4776]: I1011 11:32:20.456142 4776 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fg6sc/master-2-debug-z52xq" 
event={"ID":"a93f155d-d266-4c26-a637-78bbea2930c0","Type":"ContainerDied","Data":"494092c165a595989cfe110879f42089895a5335a151a6b79d3adc338826fa1e"} Oct 11 11:32:20.510321 master-2 kubenswrapper[4776]: I1011 11:32:20.510236 4776 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fg6sc/master-2-debug-z52xq"] Oct 11 11:32:20.516290 master-2 kubenswrapper[4776]: I1011 11:32:20.516235 4776 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fg6sc/master-2-debug-z52xq"] Oct 11 11:32:20.555889 master-2 kubenswrapper[4776]: I1011 11:32:20.555801 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-574d7f8db8-cwbcc_d59f55bb-61cf-47d6-b57b-6b02c1cf3b60/prometheus-operator/0.log" Oct 11 11:32:20.574943 master-2 kubenswrapper[4776]: I1011 11:32:20.574878 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-574d7f8db8-cwbcc_d59f55bb-61cf-47d6-b57b-6b02c1cf3b60/kube-rbac-proxy/0.log" Oct 11 11:32:20.627544 master-2 kubenswrapper[4776]: I1011 11:32:20.627468 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-79d5f95f5c-tf6cq_2ec2ac05-04f0-4170-9423-b405676995ee/prometheus-operator-admission-webhook/0.log" Oct 11 11:32:21.551745 master-2 kubenswrapper[4776]: I1011 11:32:21.551687 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fg6sc/master-2-debug-z52xq" Oct 11 11:32:21.709438 master-2 kubenswrapper[4776]: I1011 11:32:21.709357 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l67q\" (UniqueName: \"kubernetes.io/projected/a93f155d-d266-4c26-a637-78bbea2930c0-kube-api-access-8l67q\") pod \"a93f155d-d266-4c26-a637-78bbea2930c0\" (UID: \"a93f155d-d266-4c26-a637-78bbea2930c0\") " Oct 11 11:32:21.709660 master-2 kubenswrapper[4776]: I1011 11:32:21.709518 4776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a93f155d-d266-4c26-a637-78bbea2930c0-host\") pod \"a93f155d-d266-4c26-a637-78bbea2930c0\" (UID: \"a93f155d-d266-4c26-a637-78bbea2930c0\") " Oct 11 11:32:21.709950 master-2 kubenswrapper[4776]: I1011 11:32:21.709891 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a93f155d-d266-4c26-a637-78bbea2930c0-host" (OuterVolumeSpecName: "host") pod "a93f155d-d266-4c26-a637-78bbea2930c0" (UID: "a93f155d-d266-4c26-a637-78bbea2930c0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 11:32:21.710238 master-2 kubenswrapper[4776]: I1011 11:32:21.710208 4776 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a93f155d-d266-4c26-a637-78bbea2930c0-host\") on node \"master-2\" DevicePath \"\"" Oct 11 11:32:21.713713 master-2 kubenswrapper[4776]: I1011 11:32:21.713664 4776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a93f155d-d266-4c26-a637-78bbea2930c0-kube-api-access-8l67q" (OuterVolumeSpecName: "kube-api-access-8l67q") pod "a93f155d-d266-4c26-a637-78bbea2930c0" (UID: "a93f155d-d266-4c26-a637-78bbea2930c0"). InnerVolumeSpecName "kube-api-access-8l67q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 11:32:21.812452 master-2 kubenswrapper[4776]: I1011 11:32:21.812386 4776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l67q\" (UniqueName: \"kubernetes.io/projected/a93f155d-d266-4c26-a637-78bbea2930c0-kube-api-access-8l67q\") on node \"master-2\" DevicePath \"\"" Oct 11 11:32:22.070720 master-2 kubenswrapper[4776]: I1011 11:32:22.070569 4776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a93f155d-d266-4c26-a637-78bbea2930c0" path="/var/lib/kubelet/pods/a93f155d-d266-4c26-a637-78bbea2930c0/volumes" Oct 11 11:32:22.474329 master-2 kubenswrapper[4776]: I1011 11:32:22.474274 4776 scope.go:117] "RemoveContainer" containerID="494092c165a595989cfe110879f42089895a5335a151a6b79d3adc338826fa1e" Oct 11 11:32:22.474559 master-2 kubenswrapper[4776]: I1011 11:32:22.474323 4776 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fg6sc/master-2-debug-z52xq" Oct 11 11:32:24.584093 master-2 kubenswrapper[4776]: I1011 11:32:24.583984 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwrzt_a7969839-a9c5-4a06-8472-84032bfb16f1/controller/0.log" Oct 11 11:32:25.845274 master-2 kubenswrapper[4776]: I1011 11:32:25.845227 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwrzt_a7969839-a9c5-4a06-8472-84032bfb16f1/frr/0.log" Oct 11 11:32:25.871703 master-2 kubenswrapper[4776]: I1011 11:32:25.871658 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwrzt_a7969839-a9c5-4a06-8472-84032bfb16f1/reloader/0.log" Oct 11 11:32:25.892685 master-2 kubenswrapper[4776]: I1011 11:32:25.892610 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwrzt_a7969839-a9c5-4a06-8472-84032bfb16f1/frr-metrics/0.log" Oct 11 11:32:25.924994 master-2 kubenswrapper[4776]: I1011 11:32:25.924868 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwrzt_a7969839-a9c5-4a06-8472-84032bfb16f1/kube-rbac-proxy/0.log" Oct 11 11:32:25.954023 master-2 kubenswrapper[4776]: I1011 11:32:25.953960 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwrzt_a7969839-a9c5-4a06-8472-84032bfb16f1/kube-rbac-proxy-frr/0.log" Oct 11 11:32:25.983081 master-2 kubenswrapper[4776]: I1011 11:32:25.983019 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwrzt_a7969839-a9c5-4a06-8472-84032bfb16f1/cp-frr-files/0.log" Oct 11 11:32:26.004077 master-2 kubenswrapper[4776]: I1011 11:32:26.004027 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwrzt_a7969839-a9c5-4a06-8472-84032bfb16f1/cp-reloader/0.log" Oct 11 11:32:26.030443 master-2 kubenswrapper[4776]: I1011 11:32:26.030397 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwrzt_a7969839-a9c5-4a06-8472-84032bfb16f1/cp-metrics/0.log" Oct 11 11:32:28.964257 master-2 kubenswrapper[4776]: E1011 11:32:28.964186 4776 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.34.12:34676->192.168.34.12:38029: write tcp 192.168.34.12:34676->192.168.34.12:38029: write: broken pipe Oct 11 11:32:30.148932 master-2 kubenswrapper[4776]: I1011 11:32:30.148861 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g9nhb_018da26f-14c3-468f-bab0-089a91b3ef26/speaker/0.log" Oct 11 11:32:30.176894 
master-2 kubenswrapper[4776]: I1011 11:32:30.176850 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g9nhb_018da26f-14c3-468f-bab0-089a91b3ef26/kube-rbac-proxy/0.log" Oct 11 11:32:32.195331 master-2 kubenswrapper[4776]: I1011 11:32:32.195210 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-7866c9bdf4-js8sj_b16a4f10-c724-43cf-acd4-b3f5aa575653/cluster-node-tuning-operator/0.log" Oct 11 11:32:32.228915 master-2 kubenswrapper[4776]: I1011 11:32:32.228857 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-5tqrt_4347a983-767e-44a3-92e8-74386c4e2e82/tuned/0.log" Oct 11 11:32:33.700397 master-2 kubenswrapper[4776]: I1011 11:32:33.700333 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68f5d95b74-9h5mv_6967590c-695e-4e20-964b-0c643abdf367/kube-apiserver-operator/0.log" Oct 11 11:32:33.741053 master-2 kubenswrapper[4776]: I1011 11:32:33.741001 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68f5d95b74-9h5mv_6967590c-695e-4e20-964b-0c643abdf367/kube-apiserver-operator/1.log" Oct 11 11:32:34.495427 master-2 kubenswrapper[4776]: I1011 11:32:34.495376 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-2_d7d02073-00a3-41a2-8ca4-6932819886b8/installer/0.log" Oct 11 11:32:34.640875 master-2 kubenswrapper[4776]: I1011 11:32:34.640799 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-6-master-2_2e6df740-3969-4dd7-8953-2c21514694b8/installer/0.log" Oct 11 11:32:34.729266 master-2 kubenswrapper[4776]: I1011 11:32:34.729216 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-guard-master-2_1a003c5f-2a49-44fb-93a8-7a83319ce8e8/guard/0.log" Oct 11 11:32:36.050914 master-2 kubenswrapper[4776]: I1011 11:32:36.050779 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-2_978811670a28b21932e323b181b31435/kube-apiserver/0.log" Oct 11 11:32:36.072911 master-2 kubenswrapper[4776]: I1011 11:32:36.072846 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-2_978811670a28b21932e323b181b31435/kube-apiserver-cert-syncer/0.log" Oct 11 11:32:36.092347 master-2 kubenswrapper[4776]: I1011 11:32:36.092277 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-2_978811670a28b21932e323b181b31435/kube-apiserver-cert-regeneration-controller/0.log" Oct 11 11:32:36.115308 master-2 kubenswrapper[4776]: I1011 11:32:36.115222 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-2_978811670a28b21932e323b181b31435/kube-apiserver-insecure-readyz/0.log" Oct 11 11:32:36.139652 master-2 kubenswrapper[4776]: I1011 11:32:36.139392 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-2_978811670a28b21932e323b181b31435/kube-apiserver-check-endpoints/0.log" Oct 11 11:32:36.161666 master-2 kubenswrapper[4776]: I1011 11:32:36.161601 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-2_978811670a28b21932e323b181b31435/setup/0.log" Oct 
11 11:32:36.260835 master-2 kubenswrapper[4776]: I1011 11:32:36.260695 4776 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_revision-pruner-6-master-2_b0ae11ca-a8d5-4a55-9898-269dfe907446/pruner/0.log"